title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Modeling Human Visual Motion Processing with Trainable Motion Energy Sensing and a Self-attention Network | Accept (poster) | Summary: In this paper, the authors aim to build a computational model of the human visual motion system. They propose a two-stage model that combines trainable motion energy sensing with a recurrent self-attention network to capture the computations in V1-MT, the core structure for motion perception in the biological visual system. The model's unit responses are similar to mammalian neural recordings regarding motion pooling and speed tuning, and it replicates human responses to various stimuli. The model outperforms several state-of-the-art computer vision models in explaining human responses that deviate from the ground truth.
Strengths: 1. This paper offers a deep exploration of the human visual system built from off-the-shelf computer vision modules. The idea of using a two-stage framework to build up this system is insightful.
2. The design of this paper is intuitive, and the framework achieves reasonable results.
Weaknesses: 1. In this paper, the authors do not provide much visual evidence to support their system.
2. In some experiments, their method still falls short of the state of the art.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: None
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Please refer to weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We genuinely appreciate the time and effort of the reviewer. We respond to the concerns pointed out by the reviewer as follows.
----
**Do not provide much visual evidence to prove their system**:
We entirely agree that for a topic as intricate as motion perception, dynamic visual demonstrations would offer a much more comprehensive and understandable insight than text alone. Unfortunately, due to page constraints in the conference proceedings, it was impractical to provide a thorough visual display of the main content of the paper. To supplement this, we have provided a wealth of visual evidence in the supplementary materials. These include multiple demonstration videos and the results produced by our model. We've added hyperlinks for your convenience, and we trust that these materials will give a more in-depth understanding of our work and its significance.
**In some experiments, their method is still worse than state-of-the-art**:
We recognize that our model's optical flow prediction performance may not be as high as that of other SOTA models. However, it's important to note that our main goal is to model human motion perception. Therefore, we prioritize biological plausibility over pure performance.
The SOTA models are intentionally designed to achieve the best performance in matching ground truth (GT) data. In contrast, we impose additional constraints, such as motion energy computation, on our model to preserve biological plausibility. Hence, it is not appropriate to rely solely on a simple correlation to GT as an index for comparing the capabilities of our model with other SOTA models.
Regarding the comparison with SOTA models, our model is comparable to SOTA CV models when the target is the human response. Table 1 indicates that some SOTA models might demonstrate a higher correlation with human responses. However, as another study [ref] points out, there is a substantial inherent correlation between the ground truth and human responses. As a result, any model trained to fit the ground truth raises the question of whether its high correlation with human responses is merely a by-product of fitting the ground truth well. This led us to employ a partial correlation metric, which evaluates the correlation between the model and human responses while controlling for the impact of the ground truth. More intuitively, we exclude the covariance between GT and the model prediction as well as between GT and the human response. The partial correlation therefore reflects only how strongly the model prediction explains the variance in human responses, i.e., a pure correlation between humans and the model. We believe this offers a more accurate reflection of the model's performance on human responses.
Under the partial correlation metric, our model outperforms the SOTA models in Table 1. The likely reason is that current SOTA models in the computer vision field are primarily designed to match the ground truth as closely as possible, which is not the objective of this study. Instead, we aim to consider a wider range of factors associated with the human visual motion system when developing the model. The data in Table 1 substantiate that once the effect of the ground truth is removed, our model aligns more closely with human responses in complex natural scenes.
---
We trust these clarifications address the reviewer's concerns and offer a more transparent understanding of our work.
Thanks again for the reviewer's constructive contribution to our work.
Best,
Authors
[ref]
- Yang, Y. H., Fukiage, T., Sun, Z., & Nishida, S. Y. (2023). Psychophysical measurement of perceived motion flow of naturalistic scenes. Available at SSRN 4414877 | Summary: This paper proposes an image-computable model of human motion perception, bridging the gap between biological computation and CV models. The proposed model contains a two-stage approach that combines trainable motion energy sensing with a recurrent self-attention network for adaptive motion integration and separation. The similarity of the proposed model to human visual motion processing is demonstrated by computer neurophysiology experiments and psychophysics experiments.
Strengths: S1. This study applies DNNs to construct a model of human motion perception that extracts informative motion flows for a wide range of inputs. Specifically, a two-stage model is proposed that simply mimics mammalian V1 and MT functions, respectively.
S2. The two-stage model’s neurons exhibit direction and speed tunings similar to those observed in mammalian physiological recordings in V1 and MT.
S3. Human-like responses are generated to several traditional motion stimuli and illusions, including global motion pooling and the barbershop illusion, showing good generalization from texture-free stimuli (e.g., drifting Gabors) to complex natural scenes.
S4. This paper is clear and well-written overall.
Weaknesses: W1: Although the primary purpose of this paper is to better model human motion perception, there is a gap between the two-stage model and state-of-the-art models in the CV field regarding the optical flow prediction performance, which needs more explanations/analyses.
W2: The proposed two-stage model is relatively complicated, but this study lacks complexity analysis for each part. Considering the low energy consumption property of the human brain, energy efficiency may be another advantage of this model compared with other SOTA models.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the above statements and the following questions.
Q1: As this study just uses cosine similarity, is the phrase “self-attention” suitable? Maybe it’s better to use a term from biological computation mechanisms.
Q2: The meaning of some items in Equation 3 is not clear.
Q3: What’s the origin of F in Fig.1E? Is it equal to E? Because the whole procedure is quite complicated, it needs clearer explanations/descriptions.
Q4: What’s the meaning of “Stage II-1/2/3/4” in Fig.3 or Fig.5?
Q5: Mixed usage of “two-stage”, “two-stages” and “two stages”. Please be consistent.
Q6: In Table 1 caption, “speed, and direction” should be “direction, and speed”?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This paper has discussed its limitations in modeling human motion perception and its potential influence on future research. Maybe it's better to add some limitations regarding its performance on optical flow prediction, compared to other SOTA models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We thank the reviewer for the comprehensive comments and for recognizing the value of our work.
---
**The gap between our two-stage model and SOTA models in the CV field regarding optical flow prediction performance:**
We recognize that our model's optical flow prediction performance may not be as high as that of other SOTA models. However, it's important to note that our primary goal is to model human motion perception. Therefore, we prioritize biological plausibility over pure performance.
Our model integrates classical motion energy computation with a graph connection and attention mechanism to solve dense optical flow in real-world scenarios effectively. In the design stage, using many stacked CNN layers instead of motion energy computation might enhance performance but would obscure each layer's specific functionality, deviating from our initial intention. Instead, our model is designed for clarity of function: the first layer extracts local energy, while the second layer manages motion integration and segregation.
CV models are designed to best match ground truth (GT) data, whereas our approach adds extra constraints to maintain biological plausibility. Therefore, a simple correlation to GT shouldn't be the sole comparison index between our model and other SOTA models.
Regarding Table 1, our model is comparable to SOTA CV models when the target is the human response. Some SOTA models demonstrate a higher correlation with human responses. However, as a recent study (Yang et al., 2023) points out, there is a substantial inherent correlation between GT and human responses. As a result, any model trained to fit the ground truth raises the question of whether its high correlation with human responses is merely a by-product of fitting the ground truth well. This led us to employ a partial correlation metric, which evaluates the correlation between the model and human responses while controlling for the impact of the GT. When evaluated using the partial correlation, our model outperforms the SOTA models by a considerable gap in Table 1. Our preliminary analysis suggests that our model best explains human-perceived flow in naturalistic scenes because the attention network can perform something akin to vector decomposition (Johansson, 1973).
**The proposed two-stage model is relatively complicated and lacks complexity analysis for each part:**
Our current focus is to build a computational model that is similar to humans and competitive with SOTA CV models, without prioritizing the complexity or efficiency of the computation.
The choice of our framework is based on its algorithmic affinity to complex neural computations in the human brain. The two-stage concept is a simplified yet representative version of the real-world structure of the human brain's motion processing system. Considering the complex topology and dynamic temporal characteristics of brain neurons, we have incorporated graph structures and recurrent neural networks to handle motion integration, a function of the MT regions. With a total of 14.7 million trainable parameters, our model remains leaner than generic CNN backbones.
Our study suggests what kinds of computational mechanisms are necessary to explain human perception. It is possible that the neural implementation for this computation is different and more efficient.
**Questions**:
- Q1: As this study just uses cosine similarity, is the phrase "self-attention" suitable?
Thanks for your suggestion. In our understanding, "self-attention" in computer vision and deep learning refers to computing the similarity between each pair of spatial locations/temporal steps across a 2-D feature map/1-D sequence. Most works, such as current transformer architectures, use the dot product as the similarity metric. The cosine similarity used in this work is also a valid similarity metric, so using a different metric does not deviate from the concept of self-attention. Cosine similarity decomposes into a dot product with a normalization factor, which bounds the values to [-1, 1]. In our experiments, we found that this property improves the numerical stability of the model during training.
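To make this point concrete, here is a minimal sketch (our illustration with hypothetical shapes, not the paper's actual implementation): cosine-similarity attention is dot-product attention with an extra L2 normalization, which bounds every score to [-1, 1].

```python
import numpy as np

def dot_product_scores(F):
    """Plain dot-product self-attention scores over a flattened feature map F of shape (N, C)."""
    return F @ F.T

def cosine_scores(F, eps=1e-8):
    """Cosine-similarity scores: the same dot product after L2-normalizing
    each feature vector, so every entry lies in [-1, 1]."""
    Fn = F / (np.linalg.norm(F, axis=1, keepdims=True) + eps)
    return Fn @ Fn.T
```

The normalization keeps the score range fixed regardless of feature magnitude, which is the numerical-stability property mentioned above.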
- Q2: The meaning of some items in Equation 3 is not clear.
Apologies for the confusing mathematical symbols. The symbol $\Re$ and $\Im$ denote acquiring the Real and Imaginary parts from a complex value, respectively.
We will replace these with **Re[·]** and **Im[·]** for better readability. In addition, $*$ denotes the convolution operation.
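To illustrate the notation, here is a minimal sketch (our illustration, not the paper's actual Stage I filters): the Re[·] and Im[·] parts of a complex Gabor form a quadrature pair, $*$ is convolution, and squaring and summing the two filter outputs yields a phase-invariant energy response.

```python
import numpy as np

# A 1-D complex Gabor: Re[.] gives the even filter, Im[.] the odd one.
x = np.linspace(-3, 3, 61)
gabor = np.exp(-x**2 / 2) * np.exp(1j * 2 * np.pi * 0.5 * x)

signal = np.sin(2 * np.pi * 0.5 * np.linspace(0, 20, 401))  # a slice of a drifting grating

even = np.convolve(signal, gabor.real, mode="same")  # Re[g] * s
odd = np.convolve(signal, gabor.imag, mode="same")   # Im[g] * s
energy = even**2 + odd**2  # quadrature energy, insensitive to stimulus phase
```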
- Q3: What's the origin of F in Fig.1E? Is it equal to E? needs clearer explanations/descriptions.
The RMIB block in Fig. 1D is expanded in Fig. 1E. In the initial recurrent stage, the red dot (F) and the blue dot (E) both originate from the same motion energy in Stage I. However, in subsequent recurrent stages, 'F' and 'E' begin to diverge: 'F' represents the guiding attention for global motion integration, whereas 'E' is the motion energy response used to decode the optical flow.
We acknowledge that this process could use more clarification and will enhance the captions to explain the model's workings better. Furthermore, we found a labeling error in Fig 1 legend (in the lower right corner), where the descriptions of the blue and red dots should be switched. These will be corrected in our revised manuscript.
- Q4: What's the meaning of "Stage II-1/2/3/4" in Fig.3 or Fig.5?
Stage II is recurrent, and we analyze the result of each recurrent iteration: "Stage II-1/2/3/4" denotes the result after iteration 1/2/3/4.
We will provide details in the revision.
- Q5: Mixed usage of "two-stage", "two-stages" and "two stages".
- Q6: In Table 1 caption, "speed, and direction" should be "direction, and speed"?
Thanks for the suggestions. We will unify the terminology to "two-stage" and change the Table 1 caption to "direction, and speed".
---
We hope our responses sufficiently address the reviewer's concerns.
Best,
Authors | Summary: I have read the authors' rebuttal and will maintain my already high rating.
This paper proposes a new model of the dorsal pathway (V1->MT) using a two-stage architecture. The first stage uses spatiotemporal filters tuned by supervised learning, while the second stage uses a dynamic connection between motion detectors based on the similarity of motion responses. The model is able to capture several important aspects of human motion processing, including both neurophysiological properties and psychophysical responses.
Strengths: -The paper is very well written, although there are a few places where it is not clear.
-The model is quite convincing in its fits to the data, both neurophysiological and psychophysical.
-The models fits to data are compared to 8 different SOTA models of optical flow, and when the correlation to the ground truth is factored out, the model is superior to all others (by having a high correlation to the ground truth motion, the models will necessarily have a high correlation with the human data). This is an important test.
-The model can not only account for responses to low-level stimuli (drifting gabors), but can also give convincing responses to natural and artificial movies. I am not an expert in motion processing, but to my knowledge, this is the first model to do this. Perhaps other reviewers will know of other models with this capability.
-The model is a sophisticated one, using a dynamic graph construction that connects the motion detectors in stage 2, integrating the motion detectors in stage 1 based on similarity of response. In this way, the model can capture global motion, solving the aperture problem.
Weaknesses: -The model’s sophistication is also a weakness: The dynamic graph construction assumes all responses can be influenced by all other responses, depending on the response similarity. That is, dynamically, any response integrator can be connected to any other, no matter how far away spatially. This is what allows the network to integrate the local motion into global motion. How this could be implemented neurally is very unclear. However, this is an argument from lack of imagination. Also, the paper acknowledges this limitation.
-Some of the presentation is unclear/unconventional: For example, figure captions don’t label all components of the figure (e.g., the caption to Figure 5 only describes 5D), although they may be discussed in the text. It is not always clear what is being shown in the Figures, which are relatively complex.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. What do the script symbols in Equation 3 represent? They aren’t defined.
Line 67: mode’s -> model’s
123: ability to capture local motion only -> ability to capture only local motion.
132: process -> processing
bottom of page 4: At this point, it would be great to give an intuitive description of what this adjacency matrix represents. It looks to me like this strongly and globally connects neurons with similar motion responses, which would make intuitive sense, but since it is dynamically computed, it isn’t obvious how this would be implemented neurally.
184: our primary lies -> our primary goal lies
It’s unclear what’s being shown in Figure 3D, especially the graphs in the first row on the right (is this still part of D?). Presumably, lines 227-231 are describing what this is, but I don’t understand the explanation.
line 251 refers to figures 3A and 4A. Are both supposed to be 4A?
Also, panel E should be to the left of panel F.
Figure 4B middle panels. What are these depicting?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: Barber pole illusion is not quite captured properly (Figure 4D), although the end points are.
The neural implementation of this model is unclear. This is mentioned as a limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thanks for the reviewer's appreciative comments regarding our work. We would like to address the reviewer's concerns as follows.
---
**On the neural implementation of dynamic graph construction**:
We admit that the specific neural implementation of the attention mechanism is still unclear. This is a complex neuroscientific issue and one of the limitations we acknowledge in our paper. However, numerous psychophysical findings suggest the presence of global motion integration capabilities. For instance, human visual motion computation exhibits substantial interactions over vast spatial separations (refer to Maruya, K., Holcombe, A. O., & Nishida, S., 2013). To simulate this, we required a function capable of flexibly integrating spatial information, which is what the attention mechanism and graph topology effectively provide.
As the reviewer correctly pointed out, the current version of our model assumes that, dynamically, any response integrator can be connected to any other, no matter how far away spatially. This may sound physiologically implausible. However, we could incorporate position embeddings to mirror retinal topological information when calculating the adjacency matrix. Consequently, as illustrated in our supplementary material (Figs. 1-3), the visualized connections primarily focus on the local area of the selected regions.
However, considering that human visual motion computation exhibits substantial interactions over 100 deg (see, e.g., Maruya, Holcombe & Nishida, 2013), designing biologically plausible position embedding is not an easy task. It is possible that the neural implementation for this computation is different and more efficient, and we will acknowledge this limitation in the manuscript.
**Addressing presentation clarity**:
We apologize for any confusion stemming from our figures and captions. Our revised manuscript aims to enhance figure clarity and provide more detailed captions to explain each component thoroughly.
**Questions**:
- **Q1: What do the script symbols in Equation 3 represent?**
A1: We apologize for the lack of explanation. The symbols $\Re$ and $\Im$ extract the real and imaginary parts of a complex number, while $*$ denotes convolution operations. We will add these definitions in our revised manuscript for clarity.
- **Q2: some typo errors**
A2: Thank you very much for the careful check and correction. We will correct them in our revised manuscript.
- **Q3: bottom of page 4: give an intuitive description of what this adjacency matrix represents**
A3: Thanks for your suggestion. Intuitively, this adjacency matrix represents the neuron's affinity or connectivity within the space. We will provide a more in-depth explanation in the revised manuscript.
- **Q4. It's unclear what's being shown in Figure 3D, especially the graphs in the first row on the right (is this still part of D?).**
First, for clarity, we will consider relocating the graphs in the first row to the supplementary material or appendix.
Fig. 3D and its accompanying content are designed to investigate the spectral properties of the units in our model. To accomplish this, we subjected the model to a combination of drifting-Gabor stimuli with varying spatiotemporal frequency components, which we can interpret as testing the spectral receptive field of each unit.
To analyze these receptive fields, we employed a 2D Gaussian profile fitting. This mathematical approach provides key parameters such as the central location and oblique angle of the fitted profile. These parameters allow for a quantitative examination of the spectral field distribution for each neuron. The upper-right section of Fig 3D demonstrates the distribution of these fitted Gaussian profiles. Each small dot corresponds to the central location of a unit's receptive field in the frequency domain. The slanted bar associated with each dot signifies the oblique angle of the spectral receptive field for the corresponding neuron. The length of the bars represents the eccentricity of the receptive field. This collection of oblique angles is further visualized as a bar plot in Figure 4E.
However, we want to stress that Fig 3D primarily serves to introduce Figure 4E. It does not aim to advance any argument. Please refer to the supplementary material (Line 117) for more details.
- **Q5. line 251 refers to figures 3A and 4A...**
Thanks for your correction! In line 251, we changed 'Fig.3(A)' to 'Fig. 4'.
We will also adjust the arrangement of panels E and F.
- **Q6. Figure 4B middle panels...**
The middle panels of Figure 4B represent the neurons' connectivity by visualizing the adjacency matrix. The heat map indicates the connectivity (cosine similarity in the adjacency matrix) from a selected local region to other regions; the warmer the color, the higher the similarity. The two figures depict how units with high activity establish long-distance connections to resolve the aperture problem when subjected to Gabor (ambiguous motion) stimuli. In contrast, plaid stimuli (unambiguous motion) suppress these long-distance connections. We will provide details in the captions.
*More details*:
Fig. 4 (B) compares spatial motion integration between 1D Gabor motion (left) and 2D plaid motion (right). Humans are able to perceive global downward motion only in the former case. In the latter case, local integration of motion signals takes priority over global integration. Once the local ambiguity is resolved, the global integration process is suppressed. Our model can predict such adaptive motion pooling in human visual processing. (K. Amano et al., 2009)
---
We hope our responses address the reviewer's concerns and appreciate the reviewer's constructive feedback.
Best,
Authors | Summary: This paper proposes a novel model of motion analysis. It takes inspiration from biological motion processing to try and solve the aperture problem, leveraging a combination of biologically inspired constrained structure with learned parameters. This produces strong partial correlations of motion components in empirical tests, but also interesting emergent phenomenon when investigating the distribution of unit responses between the different network components.
Strengths: - This paper does a good job of anchoring model development with a strong motivation and justifications based on biological understanding of motion processing. This is not an easy task, and the paper does a good job of finding compromise between the performance-driven approach of data fitting with the desire for an explanatory model that provides insight into why a certain performance or behaviour is achieved.
- The paper is quite dense with a wide range of ways to explore the problem and the model. This is both a strength and a weakness, as it sometimes makes the paper difficult to follow, but the range of analysis is impressive.
Weaknesses: - Missing reference: Tsotsos et al., "Attending to visual motion", CVIU 2005
-- This paper provides a biologically motivated model of motion including attentional effects and structure informed by our understanding of visual areas V1, MT, MST, and 7a in the primate visual system.
- Figure 1 is rather confusing; there's a lot going on, and I found the flow of information rather unclear. If possible, I would recommend breaking it up into separate figures or finding a way to show how the different components connect more clearly. For example, is C a sub-component of A, or just some example receptive fields? If the latter, why is it part of the network diagram? Similarly, in the text F is given as having dimensions HWx256 (Line 140), but in the RMIB section of Figure 1 it looks like it is supposed to have HxWxC dimensions. Is this a different F? This was not clear.
- The partial correlation metric should be clearly defined; I did not see a definition for this in the paper. It would be better to have more discussion of the aspects that the model does not excel at compared to competing models (e.g. FlowFormer for v.s. Human and RAFT for vs. GT in Table 1).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - In the caption for Table 1 it states that uv = Cartesian space, dir = speed, and spd = direction. I assume dir should be direction and spd should be speed?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Limitations seem to be adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thanks for the constructive feedback from the reviewer.
-----
**Missing reference**:
We will include the recommended reference: Tsotsos et al., "Attending to visual motion," CVIU 2005, in the introduction and discussion part.
**Clarification of Figure 1**:
We understand that Figure 1 seems overcrowded and potentially confusing. We will work on refining it for improved clarity and coherence.
As for the reviewer's specific questions, 'Fig. 1C' is indeed a sub-component of 'Fig. 1A'. We designed the model such that there are 256 different motion energy units in Stage I and 'Fig. 1C' visualizes one of these units. This unit consists of a quadrature pair of spatial and temporal filters, forming a spatiotemporal slanted receptive field. We will make this relationship clearer in the revised manuscript.
As for the dimensions of F, we apologize for the confusion. The correct description should be $F \in \mathbb{R}^{H×W×C}$ where $C = 256.$ (Line 140)
*Here we provide some additional context:*
In our model, C represents the number of channels in a given feature map, which in this case is 256. Hence, we have $H×W×256$. This is equivalent to $HW×256$; the only difference lies in the shape in which we represent the same quantity. When we say $F \in \mathbb{R}^{H×W×C}$, we refer to a 3D space. On the other hand, $HW×C$ represents the same set of elements but flattened along one dimension—akin to reshaping a 2D image into a 1D sequence. This change in representation is primarily for the convenience of matrix product calculation to get the adjacency matrix. While this convention is commonly used when defining tensor shapes in the context of neural networks, we acknowledge that it may not be mathematically rigorous.
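As a small sketch of this reshape convention (spatial sizes are hypothetical; C = 256 as in the paper):

```python
import numpy as np

H, W, C = 4, 6, 256  # H, W are hypothetical sizes; C = 256 as in the paper
F = np.random.default_rng(0).standard_normal((H, W, C))

# H x W x C -> HW x C: the same elements, flattened for matrix-product convenience
F_flat = F.reshape(H * W, C)
F_norm = F_flat / np.linalg.norm(F_flat, axis=1, keepdims=True)
A = F_norm @ F_norm.T  # (HW, HW) adjacency matrix of cosine similarities
```

Here A[i, j] is the cosine similarity between the feature vectors at flattened locations i and j; reshaping with A.reshape(H, W, H, W) recovers the spatial indexing.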
We will include these clarifications in our revised manuscript.
**The partial correlation metric should be clearly defined; I did not see a definition for this in the paper. It would be better to have more discussion of the aspects that the model does not excel at compared to competing models (e.g., FlowFormer for v.s. Human and RAFT for vs. GT in Table 1).**
The partial correlation we used is defined as:
$$
\rho_{\text{model}} = r_{\text{resp,model} \cdot GT} = \frac{r_{\text{resp,model}} - r_{\text{resp,}GT}\, r_{\text{model,}GT}}{\sqrt{1 - r^2_{\text{resp,}GT}}\, \sqrt{1 - r^2_{\text{model,}GT}}}
$$
where $r$ is the Pearson correlation. Due to page limitations, we briefly mentioned the partial correlation from Line 283 but did not detail the specific equation definition. To ensure clarity, we will provide the definition of partial correlation in our supplementary material.
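For concreteness, the metric can be computed directly from three Pearson correlations (a minimal sketch; the variable names are ours):

```python
import numpy as np

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

def partial_corr(resp, model, gt):
    """Correlation between human response and model prediction,
    controlling for the ground truth."""
    r_rm, r_rg, r_mg = pearson(resp, model), pearson(resp, gt), pearson(model, gt)
    return (r_rm - r_rg * r_mg) / (np.sqrt(1 - r_rg**2) * np.sqrt(1 - r_mg**2))
```

Note that when the model prediction equals the human response, the partial correlation is 1 regardless of how strongly both correlate with the ground truth.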
Here, we would like to explain the reason for using partial correlation, combining the question of "*FlowFormer for v.s. Human and RAFT for vs. GT.*"
We recognize that our model's optical flow prediction performance may not be as high as that of other SOTA models. However, it's important to note that our main goal is to model human motion perception. Therefore, we prioritize biological plausibility over pure performance.
The SOTA models are intentionally designed to achieve the best performance in matching ground truth (GT) data. In contrast, we impose additional constraints, such as motion energy computation, on our model to preserve biological plausibility. Hence, it is not appropriate to rely solely on a simple correlation to GT as an index for comparing the capabilities of our model with other SOTA models.
On the other hand, we believe that the correspondence between model predictions and human responses provides a fairer index for comparing models. However, previous psychophysical studies have shown that human responses are inherently highly correlated with GT [ref]. This correlation implies that any model trained to fit the GT will also correlate with human responses, regardless of its design. Following the strategy of that previous study, and to separate the true human-model correspondence from the confounding human-GT correspondence, we employ partial correlation, controlling for GT. Intuitively, we exclude the covariance between GT and the model prediction, and between GT and the human response. The resulting partial correlation therefore reflects only how well the model prediction explains the variance in human responses, i.e., a pure human-model correlation.
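For concreteness, the partial correlation defined above can be computed directly from the three pairwise Pearson correlations. A minimal sketch (the function names are ours, not from the paper):

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation between two 1-D arrays."""
    return np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1]

def partial_corr(resp, model, gt):
    """Correlation between human response and model prediction,
    controlling for the ground truth (GT)."""
    r_rm = pearson(resp, model)  # resp vs. model
    r_rg = pearson(resp, gt)     # resp vs. GT (confound)
    r_mg = pearson(model, gt)    # model vs. GT (confound)
    return (r_rm - r_rg * r_mg) / np.sqrt((1 - r_rg**2) * (1 - r_mg**2))
```

If a model's predictions track GT but share no GT-independent variance with human responses, the partial correlation collapses toward zero, which is exactly the confound this metric removes.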
Under the partial correlation metric, our model surpasses the state-of-the-art models listed in Table 1. In addition, the raw EPE data show that our model closely matches human responses in complex natural scenes, whereas other state-of-the-art models align more closely with the ground truth.
We hope the above explanation provides you with a better understanding.
We will detail these points further in our revised manuscript.
**Caption for Table 1:**
Thanks for noticing this mistake. 'dir' should indeed denote 'direction,' and 'spd' should denote 'speed.' We will rewrite the caption as: "uv, dir, and spd represent motion components in Cartesian space, direction, and speed, respectively."
[ref]
- Yang, Y. H., Fukiage, T., Sun, Z., & Nishida, S. Y. (2023). Psychophysical measurement of perceived motion flow of naturalistic scenes. Available at SSRN 4414877
----
We are appreciative of your time and effort spent reviewing our work and giving us valuable insights.
Best,
Authors | Rebuttal 1:
Rebuttal: We provided more comparison results in the attached PDF.
Pdf: /pdf/4b618ff20a505b27e2c446bd7b6593600198cc9c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a new V1-MT model using a normalized Gabor model of V1, followed by a recurrent self-attention stage. It uses dense optic flow as a supervised objective. The authors perform extensive in silico neurophysiology to show the model units qualitatively look like V1 and MT. They also show that this model is a better match to human visual perception on natural scenes than computer-vision models.
Strengths: - Well-written and clear
- Breadth of in silico experiments
- The model makes sense and I’m excited about the idea of segregation vs. integration as an explanation for complex receptive fields in MT
Weaknesses: - Pretty incremental in a crowded field
- Little comparison to SOTA
- Mostly qualitative and little quantification
I really want to like this paper: full disclosure, I’m a big fan of the work of Orban et al. and more recently Cui et al. (2013) that shows that MT cells have complex receptive fields capable of integration and segregation. However, this paper kind of rubbed me the wrong way by ignoring the state-of-the-art in this field: “Despite extensive research in cognitive neuroscience, image-computable models that can extract informative motion flow from natural scenes in a manner consistent with human visual processing have yet to be established”. There are plenty of image-computable models that can extract information relevant to dense optic flow from natural scenes. Everything from the old work of Simoncelli and Heeger to the receptive field models empirically derived from MT of Nishimoto and Gallant (2011) to MotionNet from Rideaux et al. and especially to Mineault et al. (2021), inexplicably uncited in this manuscript despite being published in these very pages and having very similar motivation to this manuscript. The onus is on the manuscript to show us some failings of these previous models and how the model does better according to some axis.
To be fair, the paper could say that some of these networks–especially generic 3d CNNs without multi-scale representations and explicitly organized direction and speed tuning–do not extract quantitative optic flow information *explicitly*. However, consider this thought experiment: if I wanted to read information from a patch of MT, as a psychophysical observer presumably needs to do to solve the tasks in the paper, I wouldn’t have access to a neat organization of direction and speed tuning, either; I would need to do a readout on top of the patch. I don’t buy the premise that the brain needs to form an explicit estimate of dense optic flow at every point in space, e.g. in MT. I think it’s easy enough to turn these old models into ones that estimate dense optic flow using a linear readout, and these should be compared to the proposed model.
The paper spends a lot of time rehashing the same kind of qualitative receptive field exploration that’s a hallmark of this field: tuning curves, component vs. pattern, speed tuning curves, distribution of preferences for direction tuning, barber poles, reverse phi, etc. This is a very crowded field between all the papers from Rideaux, Welchman, Fleming, Mineault, Bakhtiari and Pack: these kinds of demonstrations have been done over and over again. It’s nice to have it in the paper but I really consider these a sanity check, not a finding, especially since this paper bakes in Gabor receptive fields in the first layer: how could they not learn direction selectivity? My advice, speed run through these to give more space for the end of the paper, which is where things get interesting.
Definitely the most interesting thing about the paper, in my view, is Table 1. It shows that the network knows something about how human motion estimation operates that is not captured by CV models. It’s a bit buried in the paper and explained a bit fast. The authors should add ablations to figure out what it is about this network that makes it more brain-like - the Gabors? the normalization? the attention mechanism? I also think they need to add in quantitative comparisons with MotionNet from Rideaux et al. and DorsalNet from Mineault et al. with a linear decoder on top–to be clear, I’d be fine with a linear decoder trained on another task. You could avoid doing a linear decoder by using an RSA-based analysis, or you could use other methods of alignment which are more restrictive than a linear decoder (e.g. from Alex Williams et al.), if you believe this is not an apples-to-apples comparison.
Overall, I think this could be a valuable contribution to the field, but it needs to be very explicit about its specific contribution in light of plentiful previous work, and it needs explicit comparisons to this previous work. I would be happy to accept granted the authors cite and address previous literature and include quantitative comparisons to MotionNet and DorsalNet, provided their model comes out on top either according to the metrics they have in Table 1 and Figure 5 or some other relevant metric they find.
Nitpicks:
- I liked the convention of using red color for trainable parameters, but the authors only use it once and then drop it later. Would recommend doing it consistently throughout the methods.
- Bottom of Page 4: “Top side of Fig. 3” → Should be a reference to figure 1. There are a couple more instances of this, so the authors should go carefully through the manuscript to find if there are more instances of that.
- The model is fairly bespoke but it’s not particularly biologically plausible with its attention mechanism. If the point is to implement recurrence, why not just use a plain transformer with tied weights at every layer instead? To be clear, the authors don’t have to fit this model, just a sentence or two to justify why they picked this architecture rather than something more off-the-shelf.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: -
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for thoroughly reading our research and comprehensive comments.
Firstly, we appreciate the reviewer's nitpicks and will make the suggested modifications.
---
**General Response:**
- We should clearly state the purpose of our study. From a scientific perspective, we aim to explain a wide range of psychophysical phenomena, including those whose physiological mechanisms are not yet clear, while aligning the model's internal representations with physiological ones. Engineering-wise, we aim to build a human-aligned model that remains competitive with SOTA CV models. In contrast, the purpose of MotionNet and DorsalNet was to explain neural responses to visual motion stimuli. They are good models for that purpose, but they do not produce a dense optical flow, so their outputs cannot be directly compared with human motion perception, our model, or the CV models.
- We do not believe in an explicit representation of dense optical flow in the brain. However, given that it is possible to reproduce a high-resolution flow from human responses (Yang et al., 2023) and that many motion phenomena include interactions of local motion signals, constructing a model like ours would be useful for a computational understanding of motion perception.
- The reviewer suggested that MotionNet/DorsalNet with a simple linear decoder to compute a dense optical flow might outperform our model. Accurate dense optical flow estimation is a difficult task that requires complex, long-range spatial interactions to handle large displacements and boundary effects. FlowNet (Dosovitskiy et al., 2015), the first optical flow model to use a multi-layer stacked CNN, has more than ten times the parameters of MotionNet but still lags behind the SOTA models in Table 1. The CV SOTA models employ many sophisticated strategies to match GT and better predict human responses. In the attached PDF, we provide additional results obtained with three relevant models: a pre-trained DorsalNet with a linear flow decoder, a general 3D CNN (the basic structure of MotionNet and DorsalNet), and FFV1MT (Solari et al., 2015), the only model known to calculate a dense optical flow with simple decoding of the Simoncelli & Heeger V1-MT mechanism. These models fall short in dense optical flow estimation accuracy or fail to account for the global integration of local motions. This outcome suggests that a simple linear decoder cannot handle dense optical flow, particularly in complex real scenes. Furthermore, in addition to flexible spatial pooling and accurate estimation of dense optical flow, our model reproduces the human-perceived Fourier motion in missing-fundamental stimuli through motion energy computation simulating V1 neurons. Comparison with other models shows how difficult it is to achieve such diverse capabilities **simultaneously** in a single model.
- We understand the concerns regarding our model being 'over-designed' compared to general 3-D CNNs, and we provide more explanation in the appendix.
- It is not a simple sanity check to show that our model achieves internal representations similar to MT with respect to component/pattern motion and the spectral receptive field, since stage II of our model is a complex recurrent self-attention network. RSA-based model comparison is an interesting idea to test in the future, but in this study we do not claim that our model outperforms MotionNet and DorsalNet in explaining MT/MST responses to the specific types of motion stimuli used in previous lab studies. Our preliminary analysis suggests that our model best explains human-perceived flow in naturalistic scenes because the attention network can do something similar to vector decomposition (Johansson, 1973).
- The neural implementation of the attention mechanism in our model design might look too complicated and biologically implausible. However, substantial psychophysical evidence necessitates a function for flexible spatial information integration, which the attention mechanism and graph structure capably deliver. Simple convolution stacking might yield similar outcomes but would make the model more complex and biologically uninterpretable.
We appreciate the reviewer's insightful feedback. In the revised manuscript, we will clarify how our study differs from previous studies, especially DorsalNet and MotionNet, and make modifications based on the reviewer's minor comments.
Best, Authors
---
**Appendix:**
To model a function that allows flexible integration of spatial information, we adopted the attention mechanism and graph structure, which accomplish this effectively. Stacking simple convolutions could achieve similar results but is restrictive, setting an upper limit on long-range spatial relationships. For instance, the large 2K-resolution Sintel dataset we used for comparison with humans would require hundreds of (3x3) convolution layers for a single pass of information propagation (Zhao et al., 2017). Stacking so many layers is unreasonable, leading to model bloat, limited long-distance interaction, and convergence issues. In contrast, our model's graph topology and attention mechanism allow global interaction in a single-layer operation. MotionNet's simplicity is possible because its test stimuli are only 32x32, which a few CNN layers can handle; this approach does not scale to larger images because of the physical limits on long-distance information propagation. Moreover, with too many CNN layers, it becomes challenging to comprehend the exact functionality of each layer. Instead, our model is designed for clarity of function: the first stage extracts local motion energy, while the second stage manages motion integration and segregation.
Our use of the attention mechanism also offers the possibility of understanding motion integration by visualizing the adjacency matrix, as shown in Appendix Fig. 1 of our supplementary materials. This level of understanding would not be achievable by simply stacking CNN layers.
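The depth argument above can be checked with back-of-the-envelope arithmetic, under a stride-1, no-dilation assumption (the helper names are ours): n stacked 3x3 convolutions have a receptive field of 2n + 1 pixels.

```python
import math

def receptive_field_3x3(n_layers):
    # Receptive field of n stacked 3x3 convolutions with stride 1: 2n + 1 pixels.
    return 2 * n_layers + 1

def layers_to_span(width):
    # Smallest n with 2n + 1 >= width, i.e. one output unit "sees" the full width.
    return math.ceil((width - 1) / 2)
```

With receptive fields growing by only two pixels per layer, spanning a 2048-pixel-wide frame takes on the order of a thousand stacked 3x3 layers, whereas a single attention layer connects all positions directly.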
---
Rebuttal Comment 1.1:
Comment: Very cool! Glad you followed through on the evaluation. I've bumped up my rating. | null | null | null | null | null | null |
D$^2$CSG: Unsupervised Learning of Compact CSG Trees with Dual Complements and Dropouts | Accept (poster) | Summary: D2CSG presents a neural architecture for inferring CSG programs that reconstruct complex 3D shapes. The architecture is composed of two branches, a cover branch and a residual branch, which are differenced from one another to form the complete shape. The work is largely an extension of, and a strong improvement over, CAPRI-Net, wherein the major differences are: (i) the use primitive complements, (ii) separate primitive sets for the cover and residual branches, (iii) a third training stage, termed dropout, encouraging program sparsity, and (iv) switching from an encoder to an auto-decoder framework. Finding a CSG program to represent a target shape involves "overfitting" the network in a test-time optimization scheme, with an occupancy-based reconstruction loss (with regularizing terms, and various relaxations). Compared to previous neural relaxation approaches for CSG program inference, D2CSG net find programs that result in much better reconstructions (both quantitatively and qualitatively).
Strengths: The paper presents compelling, and well-supported, evidence that D2CSG provides a marked improvement for the task of 3D CSG program inference (and more generally shape abstraction). The insights that contributed to this improvement, while not a dramatic departure from past work, are sound and sensible.
Based on the qualitative results and quantitative metrics, it's clear that D2CSG outperforms a reasonable set of baseline methods, offering a new state-of-the-art bar for CSG program inference. The ablation experiments, in both the main paper and supplemental, are very helpful and generally well-structured. The fact that D2CSG is able to capture a provably larger set of possible shapes than CAPRI-Net, due to the use of complements in the primitive set, is also a nice property.
Weaknesses: Probably the biggest weakness of the paper is that its methodological improvements are largely incremental on top of the CAPRI-Net system. In my opinion, this shouldn't disqualify the paper from being accepted. Despite the similarity to CAPRI-Net, D2CSG offers real, supported improvement, and would be useful, and of interest, to the community. This type of paper requires strong experimental results, in the form of comparisons to related work and well-crafted ablation conditions, both of which are provided. To give a higher rating, beyond "accept", it would be useful to show that the insights found useful in improving CAPRI-Net to D2CSG could be extended to other architectures / systems (e.g. they would have more general applicability than this one system). While I think this could be possible, it's by no means a given.
```Ablation issues```
There were a couple of small questions/concerns I had surrounding the ablation experiments:
- Is the top row of table 3 just CAPRI-Net (from the tables this looks like the case)? But CAPRI-Net also has the encoder/decoder vs auto-decoder difference, so I would expect this to be a different architecture (e.g. the CAPRI-Net without pretraining, provided in the supplemental). Please clarify what this row is supposed to report.
- I think it would be important to add rows to Table 3, where just DB, and just DO, are added to row 1. It would be fine to report these results in the supplemental material, but having access to their performance would help disentangle the effects of each change between systems.
```Clarity```
While I wouldn't say that the paper was unclear, I think the overall clarity of the method could be improved. The paper is not very self-contained, much of the technical details are omitted from the main paper, and pushed to either past-work or the supplemental material. Due to space constraints, a full explanation of the training scheme and architecture is likely impossible to have in the main paper, but I think the main paper could still be improved to give a better *motivation* for why the architecture and training scheme is designed in the way that it is.
- Most helpful would be to add more details and annotations to Figure 2 (e.g. the D, Q, T, Con, and a matrices). If these can't fit in the figure, then sub-figures showing parts of the pipeline should be introduced (e.g. to the supplemental).
- In the text, introductory paragraphs should be used to set the stage for the various matrices, and what information they are supposed to hold. For instance, there is no explanation given for what the "Con" matrix (first introduced on line 175) holds, beyond the formulas for how it is constructed. An additional paragraph at the start of 3.2, walking through all of the matrices that will be created (and what information they hold) would be very helpful.
- Similarly, while some technical details (though not all) are provided in 3.3 indicating how the loss functions change through the training stages, it would benefit the paper to have a high-level section at the beginning of 3.3 explaining *why* multiple stages are needed (while this reasoning might have been provided in past-work, it needs to be restated here, for self-containment). For instance, something like: in stage 0 no hard decisions are made for either (a) primitive to intersection members or (b) which convexes are added into union, then in stage 1 we make a hard decision for (b), and in stage 2 we make a hard decision for (a).
```Minor```
- It would be useful to know how the reconstruction metrics change when converting from quadrics to basic primitives for user-editing (especially compared with just directly optimizing basic primitives, and not quadrics).
- Some discussion of related work on shape program inference can be improved. [36] is not a supervised method, and should likely be grouped with [11], as RL-based methods for unsupervised shape program inference (see [1*]). Relatedly, [2*] should also be added to this section, as an alternative to per-shape optimization for 3D CSG inference.
- typos/phrasing:
- L:27, genera
- L:163, are float
[1*] Neurosymbolic Models for Computer Graphics, Eurographics 2023 STAR
[2*] PLAD: Learning to Infer Shape Programs with Pseudo-Labels and Approximate Distributions, CVPR 2022
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors:
- If I understood the dropout operation, its based on finding changes to the primitive / intersection contributions that make little difference (e.g. a small delta) to the output binarized shape from the network. I'm surprised that it isn't instead based in deltas to the networks reconstruction loss (e.g. difference versus the target shape). Did you explore any types of dropout operations that considered the target occupancy values?
- The conclusion has the following statement: ```We also have ample visual evidence that the CSG trees obtained by our method tend to be more natural than those produced by prior approaches```. Looking through the decompositions in the supplemental, I'm unconvinced that many people would posit the decompositions as `natural' -- I would consider either adding additional evidence to support this claim, or removing it.
- The conclusion implies that for datasets like ABC, that lack a consistent global part decomposition, 'overfitting' is more acceptable, with the subtext that it's not helpful to learn over more than one shape at a time. I would encourage the addition of a little more nuance to this statement: while ABC-type shapes are certainly less globally consistent compared with other domains (like ShapeNet chairs), mechanical parts share clear commonalities with one another, especially at the sub-component/local level, and other methods have found success in learning over distributions of these types of shapes (e.g. [46]). Thus, instead of tying the merits of 'overfitting' to this line of argument, it would be better to let it stand on its own.
- (minor) I'm a bit confused about Eq 7. From the explanation of the method, I would have thought that W is ignored after stage 0 is done. In table 1, it looks like it is not used at all in stage 1, but then re-introduced (and binarized) in stage 2. Line 109 in the supplemental indicates that stage 2 training is the same as stage 1, but how can this be the case if W is re-introduced (is it frozen in stage 1, then unfrozen/modified for stage 2)? I would request additional clarification on this mechanism.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: While the `overfitting' setup certainly seems to improve reconstruction performance (at least for D2CSG, though interestingly not for CAPRINet), there are certain downsides of moving away from a shared latent space: e.g., ill-conditioned inference tasks (e.g. program from partial point cloud, or program from image), and shape to shape interpolation, are no longer possible. It would be good for the paper to discuss the impact of this design decision a bit (beyond the change in reconstruction).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **it would be useful to show that the insights found useful in improving CAPRINet to D2CSG could be extended to other architectures / systems.**
**A:** The dual complementary idea can be applied to other primitive-based methods for better concavity reconstruction. For example, ExtrudeNet only uses the union of extrusions to reconstruct a shape; it is possible to introduce complement extrusions to better handle concavity. The dropout idea can also be applied to other assembly-based systems as long as they employ intersection or union assembly operations.
* ExtrudeNet: Unsupervised Inverse Sketch-and-Extrude for Shape Parsing, ECCV 2022
**Ablation issues.**
**A:**
* Good point. The first row shows CAPRI-Net with pre-training and an encoder, while the other rows are run without pre-training. It would indeed be better to show CAPRI-Net without an encoder here; see the updated Table 2 in the PDF file attached to the global response.
* The existing rows already demonstrate the effectiveness of our designs. We also add two additional rows and visualization results to help disentangle the effects of each change; see the results in the PDF file attached to the global response.
**Clarity.**
**A:**
* We can revise Figure 2 as suggested.
* We can add introductory paragraphs for the various matrices.
* We can provide high-level details in Section 3.3 about the multi-stage training.
**It would be useful to know how the reconstruction metrics change when converting from quadrics to basic primitives for user-editing.**
**A:** We will provide them in the supplementary material.
**Some discussion of related work on shape program inference can be improved.**
**A:** We will improve them as suggested.
**Q1: Did you explore any types of dropout operations that considered the target occupancy values?**
**A:** The dropout module is designed to remove primitives/intermediate shapes that have little effect on the reconstructed shape. The $\Delta S$ is calculated as the difference between the reconstructed shapes before and after dropout. We did not calculate $\Delta S$ between the target shape and the reconstructed shape because it is hard to determine a proper threshold: if we set it smaller, many complicated shapes would never be considered for dropout, since they usually differ substantially from the target shape; if we set it larger, many primitives modeling shape details could be dropped, as fewer query points are sampled around such details. In our design, every shape has a chance to be considered for dropout while the shape after dropout stays close to what it was before.
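As a schematic illustration of this criterion (the greedy loop, names, and the toy boolean occupancy function are our assumptions, not the paper's exact procedure): a candidate is dropped only when the reconstructed occupancy before and after its removal differs at no more than a threshold number of query points.

```python
import numpy as np

def delta_s(occ_before, occ_after):
    # Number of query points whose inside/outside label flips after a dropout.
    return int(np.sum(occ_before != occ_after))

def greedy_dropout(ids, occupancy_fn, tau):
    """Greedily drop candidates whose removal changes at most tau query points."""
    kept = list(ids)
    occ = occupancy_fn(kept)
    for pid in ids:
        trial = [p for p in kept if p != pid]
        if trial and delta_s(occ, occupancy_fn(trial)) <= tau:
            kept, occ = trial, occupancy_fn(trial)
    return kept
```

Because $\Delta S$ compares the reconstruction to itself before and after the dropout, rather than to the target, a complicated but faithful primitive assembly is never penalized for its residual distance to the target.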
**Q2: I'm unconvinced that many people would posit the decompositions as `natural'**
**A:** We can tone this down in the revision.
**Q3: I would encourage the addition of a little more nuance to this statement.**
**A:** Thanks for your advice. We will reword the conclusion and add the nuances as the reviewer described by mentioning that "some of the models in ABC may not possess sufficient generalizability in their primitive assemblies" to avoid giving the false impression that all ABC models including mechanical objects lack commonalities in structure.
**Q4: I'm confused about W in Eq 7.**
**A:** At stage 0, values in W are set close to 1 (line 221). At stage 1, W is neither used nor updated. At stage 2, W is re-introduced and updated (line 223). Although the reconstruction loss $L_{rec}^*$ is the same in both stages, the $a^*$ in $L_{rec}^*$ is derived from different equations at stage 1 (Eq 3) and stage 2 (Eq 7). In addition, gradients from the reconstruction loss $L_{rec}^*$ are not used to update W at stage 2; W is updated only by dropout (line 223).
**Q5: Discuss more impact about using overfitting.**
**A:** We will add such discussions as suggested. Thank you.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed and well-written response. I remain very positive on this paper, and would like to see its inclusion to the conference. | Summary: The paper presents an unsupervised network learning method for reconstructing CSG trees from CAD models. The network is an enhancement of CAPRI-NET, and features the fixed operations (from bottom to top) of intersection -> union -> difference on primitives modeled by quadratic surfaces, where the difference is always applied on two intermediate shapes produced from subtrees. The enhancement to CAPRI-NET is the allowance of difference in the form of inverse primitives before intersection, which the paper shows to cover all possible CSG trees in contrast to the incomplete coverage of CAPRI-NET. In addition, the paper uses dropout pruning to reduce the redundant primitives or intersected intermediate shapes. Through experiments on ABC and ShapeNet datasets and comparisons with previous works on unsupervised CSG reconstruction, this paper shows improved results with good reconstruction accuracy and compactness. Ablation studies further show the usefulness of the dual branch design, inverse primitives, and dropout pruning.
Strengths: The paper shows the limitation of a previous work CAPRI-NET in representing all possible CSG trees and fixes it with a simple addition of inverse primitives.
The paper introduces dropout pruning to improve compactness of result CSG trees.
Extensive tests have shown the new algorithm can recover more compact and faithful results than previous works.
Weaknesses: It's not very clear how the dropout pruning is applied in the training process. Is it used in the last step and after its application there will be no network finetuning? Or the interleaved application of dropout and finetuning can be more helpful? It's desirable that the authors provide a detailed study on this issue.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In addition to the questions above, there are some detailed questions:
1. Could the primitives before intersection be shown in the expanded CSG trees? That would help readers better understand the complexity and details of the results.
2. Is there any intuitive understanding of the learned weighting vector W in Eq(4)?
3. Why $\Delta S$ of Eq(6) is not normalized by the number of sample points $n$?
4. Line223, should it refer to Eq(3) instead?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed limitations in generalization, detail recovery and limited expressiveness of quadratic primitives.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **It is not very clear how the dropout pruning is applied in the training process.**
**A:** Dropouts are not applied only at the last step but throughout the training process; please refer to lines 221-222 in the paper. After each dropout, the network parameters are fine-tuned. We repeat this process until the maximum number of iterations is reached or no primitives/intermediate shapes are dropped.
**Q1: Could the primitives before intersection be shown in the expanded CSG trees.**
**A:** Since the complicated shapes in our supplementary material usually use many primitives, we only show a part of the obtained CSG trees. For the simple shape in our pipeline, the primitives before intersection are shown in Figure 2 of the main paper. We also show additional primitive visualization examples; see Figure 1 in the PDF file attached to the global response.
**Q2: Is there any intuitive understanding of the learned weighting vector W in Eq(4)?**
**A:** Values in W are learned to be close to 1 after stage 0. This setup helps the union layer avoid using the min operation in Eq(3) at stage 0, so that all intermediate shapes can have gradients in the early stage.
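For intuition, the min-based union in Eq(3) (and its max-based counterparts) can be illustrated with implicit functions under the inside-is-negative signed-distance convention; this is a generic sketch of CSG composition, not the paper's actual Eq(3) implementation:

```python
def sphere_sdf(center, radius):
    """Signed distance to a sphere: negative inside, positive outside."""
    def sdf(p):
        dist = sum((pi - ci) ** 2 for pi, ci in zip(p, center)) ** 0.5
        return dist - radius
    return sdf

# Boolean composition of implicit shapes under this convention:
def union(a, b):      return lambda p: min(a(p), b(p))
def intersect(a, b):  return lambda p: max(a(p), b(p))
def subtract(a, b):   return lambda p: max(a(p), -b(p))  # a minus b
```

Note that `subtract` is just `intersect` with the complement of `b`, which mirrors how a final Boolean difference can be realized through intersections with inverted (complement) shapes.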
**Q3: Why is $\Delta S$ of Eq(6) not normalized by the number of sample points?**
**A:** Since the number of sampled points is constant, normalization will not change the learning process. In our experiments, we sampled a fixed number of query points from the shape before dropouts at stage 2 to calculate $\Delta S$. We will clarify this part in the revision.
**Q4: Line223, should it refer to Eq(3) instead?**
**A:** Yes, Line 223 is equation (3). Thanks for spotting this mistake.
---
Rebuttal Comment 1.1:
Comment: Thanks for the responses. | Summary: The paper proposes a reconstruction approach to building constructive solid geometry (CSG) from other 3D modalities like meshes and point clouds. The key contribution of the paper is a dual representation that considers both the shape and its complement that are built with Boolean intersection and union operations with a set of primitive convex quadric surfaces and their inverses. Two branches in a fully differentiable neural network are responsible for generating the shape and the complement that is Boolean subtracted to output the final CSG representation of a target shape. The dual representation is general and enables complex shapes to be captured well as demonstrated by extensive results.
Strengths: The paper is well written and easy to follow. It offers a competitive solution to an important inverse problem in CAD, and the proposed representation is novel, general and backed by strong empirical results in two standard datasets. The design choices are well justified and evaluated. The decision to overfit to a target shape rather than attempting to generalize is sensible, but see also a related point in weaknesses.
Weaknesses: One weakness of choosing an overfitting approach as opposed to learning from datasets is challenges related to robustness to noise and outliers. The method appears to be sensitive to how clean the input geometry is, and with noisy point clouds or low-resolution meshes, it is very likely for unintended sliver geometry to be constructed. This is apparent in Figure 4, although the proposed method does perform better than others. Some discussion of this would be beneficial in the limitations or future work section.
Another weakness is the choice of using quadric surfaces as primitives. CAD shapes are typically built with prismatic primitives like planes, cylinders, etc. and while quadric surfaces can represent most of such surfaces, optimizing in this representation is unlikely to reconstruct the exact prismatic primitives as seen in the results throughout the paper.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: In the related work under Deep CAD Models, it appears that BRepNet, UV-Net and SB-GCN are attributed as reverse engineering models while these are all encoders.
Some missing citations that might be worth adding:
- https://dl.acm.org/doi/abs/10.1145/3528223.3530078
- https://dl.acm.org/doi/abs/10.1145/3550469.3555424
References [46] and [47] are duplicates
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **One weakness of choosing an overfitting approach as opposed to learning from datasets is challenges related to robustness to noise and outliers.**
**A:** Yes, unintended geometry could be produced for noisy point clouds or low-resolution meshes. This is a general issue for overfitting-based methods. Incorporating D$^2$CSG with other point cloud denoising or upsampling modules would constitute interesting future work for robust CSG reconstruction. We can add this discussion in the revision.
**Another weakness is the choice of using quadric surfaces as primitives.**
**A:** We did discuss the limitation of quadric primitives in the conclusion. CAD shapes are not only constructed by prismatic primitives but also by other complex primitives, such as NURBS. However, we consider D$^2$CSG as a key step toward learning general and compact CSG assembly sequences. The quadric primitives in D$^2$CSG can be easily changed to other primitives, and we find quadric primitives can achieve the best fitting results compared to prismatic primitives; see Table 2 in the supplementary material. In addition, when the shape can be approximated with simple primitives, we can convert/approximate quadric functions with simple primitives (please refer to line 306).
**Q1: Revise related works and add citations.**
**A:** Thank you for these suggestions. We will incorporate them in the revision.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I am retaining my score and would be happy to see this paper accepted. | Summary: This paper proposed a method for unsupervised learning of CSG trees from mesh or point cloud. An auto-decoder approach is used, i.e., each training shape is represented by a learned latent code. Compared to previous approach CAPRI-Net, the proposed approach used complementary primitives and dual branch design to represent shapes in more complicated and accurate CSG primitives. Also, the proposed approach used dropout during training to avoid redundancy of the inferred CSG tree. The method is validated on ABC dataset and ShapeNet.
Strengths: - The idea of complementary primitives and dual branch are novel and effective.
- The proposed method significantly outperforms previous method both qualitatively and quantitatively.
- The experiment section is conducted extensively, which supports the claims well.
Weaknesses: - Some important technical details are not clear (see questions).
- It will be easier to follow the paper if there are some visualizations explaining the insights of complementary primitives and dual branch.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: - How is the T matrix predicted from the network?
- The residual loss appears in Figure 2 of the main paper; it would be helpful to move its definition to the main paper.
- Many components of the paper are based on CAPRI-Net; it would be helpful to provide a compact background section for CAPRI-Net in the paper.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **It will be easier to follow the paper if there are some visualizations explaining the insights of complementary primitives and dual branch.**
**A:** We showed additional visualization results for the ablation study, see Figure 2 in the PDF file attached to the global response.
**Q1: How is the T matrix predicted from the network?**
**A:** The T matrix is not predicted by the network but is set as learnable parameters within the network.
**Q2: The residual loss appears in Figure 2 of the main paper; it would be helpful to move its definition to the main paper.**
**A:** Thank you for pointing out this issue. We put details of the cover loss and residual loss in the supplementary material. We will mention how we use the cover loss and residual loss in the main paper.
**Q3: Many components of the paper are based on CAPRI-Net; it would be helpful to provide a compact background section for CAPRI-Net in the paper.**
**A:** We plan to add this compact background section to briefly introduce the components adopted from CAPRI-Net, as space allows.
Rebuttal: We thank all the reviewers for their insightful comments and encouraging remarks. We are glad to see reviewer recognitions that our approach is “novel,” “effective,” “compelling,” and “significantly” outperforms existing methods. Since the reviewer questions were all technical in nature, seeking more details and clarifications, we will answer them in the individual responses. The submitted PDF file attached to the global response includes additional results.
Code and data will be released upon paper acceptance.
Pdf: /pdf/be16b1c331bbffd56a95599059246675485e106d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors propose a novel, unsupervised method to reconstruct the CSG tree given a 3D shape. The authors prove that all CSG trees can be formulated with a boolean difference operation as the last step. Therefore, to generate the final reconstructed shape, the proposed method uses two branches, cover and residual, to produce the shapes used to keep and remove in the boolean difference operation. To further enhance the model's capability of generating all possible shapes, the inverse convex primitives are introduced to incorporate the boolean difference operations other than the last step. The proposed method also improves compactness using dropout. The results demonstrate significant improvement over prior works qualitatively and quantitatively.
Strengths: 1. The proposed method significantly improves the reconstruction quality, which can be easily observed from the qualitative results in Fig 3.
2. The paper is well-written and provides most of the required details (both in the main text and supplementary).
3. The authors demonstrate that the model can reconstruct the CSG trees from general and complex 3D shapes in both the ABC and ShapeNet datasets, and provide shape editing capability for downstream CAD tasks.
Weaknesses: 1. The generated CSG trees might not look natural to designers and engineers since they usually have some shared patterns and preferences when modeling 3D shapes. Moreover, the modeling sequences are often related to the design and manufacturing intents. Therefore, it would be great to see user study results comparing the generated CSG trees with real modeling sequences (e.g., Fusion 360 Gallery Reconstruction Dataset).
2. Designers usually do not model 3D shapes simply using primitives. Most solid modeling tools in CAD are profile-based, meaning that users draw 2D profiles and use them to perform 3D operations such as extrude, revolve, sweep, chamfer, fillet, etc. Therefore, mapping the generated CSG trees into those operations will be necessary for practical use.
3. Point cloud representation can be challenging when reconstructing large objects with small details, such as the camping car example in Fig. 3.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Stages 0, 1, and 2 in the main text are not coherent with stages 1, 2, and 3 in Table 1.
2. How does the proposed method perform on multiple objects (or objects with disconnected parts)?
3. How do the point sampling density, quality, or methods affect the results?
4. Have the authors tried generating CSG trees with depths larger than two (e.g., using an iterative approach)? Is there any way to justify whether the current approach is sufficient to generate any shapes? In other words, do shapes generated in each branch have enough complexity?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes, the limitations are addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **The generated CSG trees might not look natural to designers.**
**A:** We generally agree with the reviewer and there is definitely *more* that is left to do to get there.
One intrinsic difficulty is that there is no (single) ground truth for the CSG assembly of a given CAD shape. Each 3D shape could be represented by different modeling sequences, and different artists might have different design preferences: some may like the additive style, while others more used to the subtractive style. In addition, different modeling sequences also have different editing complexity for different modeling targets. Taking a more objective view, we currently focus on accounting for and striving towards the reconstruction accuracy and compactness of the CSG assembly when designing our network.
**User study results comparing the generated CSG trees with real modeling sequences (e.g., Fusion 360 Gallery Reconstruction Dataset).**
**A:** This is an interesting thought. However, the extrusion-based modeling sequences in Fusion360 are quite different from our primitive-based CSG assembly, the latter being our goal. We acknowledge that sketch-and-extrude is quite a natural editing and modeling paradigm for designers, e.g., see Google (Trimble) SketchUp. Learning those types of assemblies is a very difficult goal, e.g., see ExtrudeNet and SECAD-Net as recent attempts.
* ExtrudeNet: Unsupervised Inverse Sketch-and-Extrude for Shape Parsing, ECCV 2022
* SECAD-Net: Self-Supervised CAD Reconstruction by Learning Sketch-Extrude Operations, CVPR 2023
**Designers usually do not model 3D shapes simply using primitives.**
**A:** A fair point; see our remarks above on sketch-and-extrude. To this end, we believe that D$^2$CSG could serve as a valuable supplementary approach to existing modeling paradigms. By utilizing extrusions and sweep operations to create primitives, we can employ D$^2$CSG to construct even more intricate shapes. Introducing broader and more versatile primitives into the D$^2$CSG framework presents an intriguing avenue for future research. We intend to include this discussion in our paper.
**Point cloud representation can be challenging.**
**A:** Yes, point clouds are not the best representations for details. One future work is to first use SOTA point cloud upsampling methods to produce details or use adaptive sampling to sample more points around details.
**Q1: Stages 0, 1, and 2 in the main text are not coherent with stages 1, 2, and 3 in Table 1.**
**A:** Thank you for pointing out this issue. We will address it in the revision.
**Q2: How does the proposed method perform on multiple objects (or objects with disconnected parts)?**
**A:** Our method is self-supervised and is robust to unseen shape structures and categories; see Figure 3. In particular, we use a shape-specific optimization technique, allowing the method to effectively accommodate diverse objects, including multi-part objects.
**Q3: How do the point sampling density, quality, or methods affect the results?**
**A:** Our method is no exception to the "garbage in, garbage out" concept. Less sampled points or low-quality points will produce worse results, see Table 1 in the PDF file attached to the global response.
**Q4: Have the authors tried generating CSG trees with depths larger than two (e.g., using an iterative approach)? Is there any way to justify whether the current approach is sufficient to generate any shapes? In other words, do shapes generated in each branch have enough complexity?**
**A:** In the supplementary material, we provide a proof that the current approach is able to produce any shape. Please see Proposition 2 and its proof in the supplementary material (line 35).
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the rebuttal. I would love to see primitive-based and extrude-based methods converge in the future. I will maintain my rating of strong accept.
Non-Stationary Bandits with Auto-Regressive Temporal Dependency | Accept (poster) | Summary: The authors propose a bandit algorithm in the restless setting, when the rewards have an auto-regressive structure.
Strengths: 1. Contributes to the non-stationary case, in contrast to the vast majority of results in the stationary setting.
Weaknesses: 1. Knowledge of the (single-parameter) alpha, and the problem of estimating it simultaneously. It is unclear whether the theoretical claims (in addition to the numerical results) hold when these assumptions are relaxed (ref. Sec 2)
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Some discussion of Exp3's limitation is already given in the lower bound section. It would be great to see a discussion on adversarial bandits and how much tighter your results are compared to some of the algorithms there.
2. It is unclear why the dynamic benchmark was chosen instead of the static one (other than that it seems suitable and is harder). If we instead use the static benchmark, would there be any regret UB/LB gains?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: None identified.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful feedback! We’ve addressed your questions below and will integrate the discussions into our revised paper.
**Regarding learning the AR parameter,**
* In Sec. 8, we introduced an MLE-based approach that learns the AR parameter. We provided theoretical guarantees for our estimation (Prop. 8.1), and also numerically showed that the performance of our algorithm remains robust and competitive even when there is noise in the estimated AR parameters (Fig. 2).
Specifically, our numerical results reveal that AR2 is robust to the lack of knowledge of $\alpha$. For instance, in experiments we conducted for Fig. 2, performing MLE with only 50-150 data points and using the estimated AR parameter in AR2 would only increase normalized per-round regret by at most 4% compared to using the true AR parameter.
Given the promising numerical results, we believe that our MLE-based approach, which also handles heterogeneous $\alpha_i$’s (as noted in Sec.8), can be a useful method for practitioners to estimate the AR parameters in reality.
* Many decision-makers in practice also have some prior knowledge of the AR parameters given past data (see discussion in Sec. 2 and our case study in Sec. 7). This motivates our initial assumption of known AR parameters. Even if such prior knowledge isn’t fully accurate, using them as initial estimates can significantly ease our learning process.
* The main focus of our paper is to study how to use knowledge of the temporal structure to enhance online decision-making. With this in mind, we first focused on the AR-based bandit problem assuming knowledge of $\alpha$, and addressed learning of the AR parameters as a subsequent problem. Even with ample information on the underlying AR process, our non-stationary MAB problem remains challenging.
We appreciate your comment on the simultaneous learning of AR parameters. Given the focus of this paper, we will leave its theoretical analysis as an exciting future direction, and will add it to our future works section.
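As a sanity check on this estimation route, the conditional MLE for a Gaussian AR(1) process reduces to least-squares regression of $r(t+1)$ on $r(t)$. A minimal sketch with hypothetical function names, ignoring the truncation to $[-R, R]$ (so this is an illustration of the idea, not the paper's procedure):

```python
import random

def simulate_ar1(T, alpha, sigma, seed=0):
    """Sample a path of r(t+1) = alpha * r(t) + alpha * eps(t),
    eps(t) ~ N(0, sigma^2) -- the AR parametrization used here,
    without the boundary truncation."""
    rng = random.Random(seed)
    r = [0.0]
    for _ in range(T):
        r.append(alpha * r[-1] + alpha * rng.gauss(0.0, sigma))
    return r

def estimate_alpha(r):
    """Least-squares / conditional-MLE estimate of the AR parameter:
    regress r(t+1) on r(t)."""
    num = sum(r[t] * r[t + 1] for t in range(len(r) - 1))
    den = sum(r[t] ** 2 for t in range(len(r) - 1))
    return num / den
```

With a long enough path the estimate concentrates around the true $\alpha$; with only 50-150 points (as in the rebuttal's Fig. 2 experiments) the estimate is noisier but still usable.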
**Regarding comparison with Exp3,**
* We first highlight that Exp3 is designed for a completely adversarial setting where rewards can change arbitrarily. It is shown to be optimal against a weak static benchmark, which only considers a single best arm in hindsight, rather than a dynamic benchmark that chooses the best arm at every round. Further, it does not consider any temporal structure associated with the reward distributions. Hence in our setting where we intend to capture the best moving target, Exp3 doesn’t seem to be the most appropriate choice.
* In our numerical studies (Appendix A), we’ve already compared our algorithm with a variation of Exp3—the RExp3 algorithm from [Besbes et al. 2014]. RExp3 uses Exp3 as a subroutine, restarting at the start of each epoch. Similar to AR2, their restarting mechanism balances the "remembering" and "forgetting" tradeoff, improving Exp3's performance in non-stationary bandits with dynamic benchmarks.
Yet, there are key differences between RExp3 and our approach: (i) RExp3 assumes sublinear total variation in expected rewards, while our AR setup involves $O(T)$ total variation, and (ii) RExp3 does not use any knowledge of the temporal structure. Theoretical results for RExp3 merely suggest that its total regret would be $O(T)$ in our setup, but do not offer any per-round regret guarantees in terms of AR parameters $\alpha$ and $\sigma$. Our simulations (Table 2, Appendix A) also show that RExp3 does not perform well in the AR setup, as it's tailored for environments with limited changes (see Appendix A for more discussion).
* For completeness, we have additionally evaluated the performance of Exp3 and compared it with AR2 and RExp3. Please refer to _Table 3 in the PDF of the global response_. From the table, we see that Exp3's performance is very close to (sometimes slightly worse than) RExp3, which is expected considering their similarities. Nonetheless, AR2 performs much better than Exp3/RExp3 in all types of environments, as AR2 leverages knowledge of the AR temporal structure to swiftly adapt to changes.
* Please also refer to Table 2 in Appendix A to see the comparison of AR2 with other algorithms that adopt the static benchmark (e.g., UCB, $\epsilon$-greedy, etc.). There, AR2 also displays superior performance.
**Regarding our choice of the dynamic benchmark,**
* Following our discussion on Exp3, if we were to adopt a static benchmark in an adversarial setting, the Exp3 algorithm could already yield a $O(\sqrt{T})$ static regret. Yet, as illustrated by our numerical results (Table 2 in Appendix A and _Table 3 in the PDF of global response_), algorithms tailored for static benchmarks, such as UCB and Exp3, do not adapt well to the rapid changes inherent to our setting.
* While we recognize that the dynamic benchmark presents a tougher challenge than the static benchmark and does not permit sublinear regret, it allows us to aim high and track the best moving target. This is especially important in a fast-changing environment that we studied, as no single arm can consistently dominate. Algorithms designed for static benchmarks tend to fall behind, as seen in our numerical studies. Essentially, the dynamic benchmark highlights the value of leveraging temporal structure in online decision-making, which is our main goal, and also where our algorithm excels.
* The real-world applications discussed in Sec. 1 also underscore the importance of adopting a dynamic benchmark. In practice, the decision-maker usually wishes to be agile and swiftly react to changes in the environment. For example, in online product recommendations, as product demand shifts, it's vital to quickly adjust to show in-demand items. Relying on a single historically "best" product can’t be optimal in such dynamic scenarios. The same rationale applies to predicting ad CTR, where ad platforms must pivot quickly in response to fluctuating CTRs.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for such a detailed response! | Summary: This work uses AR(1) model to model the non-stationary multi-armed bandit (MAB) problem. This paper considers a new performance metric, dynamic steady-state regret, and establishes lower bound on regret. Furthermore, This paper proposes the AR2 algorithm and provides a relatively tight regret upper bound.
Strengths: Previous works on non-stationary multi-armed bandit (MAB) problems have often assumed bounded non-stationarity, as only bounded non-stationarity allows for achieving sublinear dynamic regret. This work, however considers the setting of unbounded non-stationarity. This provides a new perspective on handling non-stationarity and addresses the evaluation of algorithm performance in the context of linear regret. Additionally, this paper introduces the concept of steady state error to quantify the algorithm's performance under these conditions.
Weaknesses: 1. In this paper, the AR(1) model is defined with $\alpha \in (0,1)$, which actually qualifies it as a stationary model; $\alpha$ would need to be at least 1 to be considered non-stationary. For example, in scenarios involving bounded non-stationarity, the expected reward's variation bound is typically represented by $\sum_{t=1}^T \|r(t+1)-r(t)\|_\infty$, which corresponds to the case of AR(1) with $\alpha = 1$. Therefore, using this stationary AR(1) model ($\alpha \in (0,1)$) to characterize non-stationary multi-armed bandit (MAB) problems seems unreasonable.
2. It seems unreasonable to assume that the expected reward has upper and lower bounds of $[-R, R]$ in this setting. If we assume that the expected reward is bounded within $[-R, R]$, even with random arm selection, the per-round dynamic regret should be at most $2R$. This implies that any algorithm (e.g., algorithms without restarts) would eventually reach a steady state, which is unrealistic in a non-stationary environment. However, Theorem 5.1 in the paper does not seem to make this point clear: as $\alpha$ approaches 1, the per-round dynamic regret upper bound for random arm selection tends to infinity (whereas it should be at most $2R$ when the expected reward is bounded by $[-R,R]$).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I hope the authors can provide an explanation for the issues I mentioned regarding the stationary AR(1) model, the bounded expected rewards, and Theorem 5.1.
If the response is reasonable, I will consider increasing the score.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: As mentioned in the weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback! We’d like to address each of your questions below. We will also integrate the following discussions and clarifications into our revised paper.
**(1) Stationarity in time series vs. bandits.**
We appreciate your comments regarding the AR-1 model's stationarity and would like to clarify the terminology used in our paper.
* In time series analysis, we agree that the definition of a stationary time series is a stochastic process with a constant unconditional joint probability distribution, which characterizes the AR-1 model with $\alpha < 1$ as a stationary time series model.
* However, within the bandit literature, the terms "stationary" and "non-stationary" are used differently. "Stationary bandits" refer to setups where the underlying distribution of each arm remains fixed, while "non-stationary bandits" encompass any scenario where the reward distribution of each arm changes over time. In our context, since the expected reward $r_i(t)$ varies over time following the AR process and since the best arm (i.e., the arm with the highest expected reward) changes over time, our problem aligns with the non-stationary bandit category. In fact, our work delves into a notably more volatile environment than most studies within the non-stationary bandit literature (e.g., [10, 52]).
* We suspect the term “non-stationarity” might be causing some confusion due to its varied interpretations across different domains. We are open to renaming our setting to “dynamic bandits” if this helps alleviate potential misunderstandings and align terminologies with existing literature. We thank the reviewer for pointing this out and will make the necessary adjustments for better clarity in the paper.
* We'd also like to remark on the total variation term:
- In our context, the total variation you mentioned can be computed as follows. Recall that our AR-process is defined as $r_i(t+1) = \alpha r_i(t) + \alpha \epsilon_i(t)$.
Hence, the total variation is given by $\sum_{t=1}^T \Vert r(t+1) - r(t) \Vert_\infty = \sum_{t=1}^T \Vert (\alpha-1) r(t) + \alpha \epsilon(t) \Vert_\infty,$ where $r(t) = (r_1(t), \dots, r_k(t))$ and $\epsilon(t) = (\epsilon_1(t), \dots, \epsilon_k(t))$.
- Note that the total variation here scales linearly with the number of rounds $T$. To see this, consider a simple example where $\alpha = 1/2$, boundary $R = 1$, and for illustrative purposes, assume $\epsilon_i(t)$ independently takes value from $\pm 1$ with equal probability $1/2$. In this simplified scenario, for any $r(t) \in [-1,1]^k$, the per-round variation at any given round (i.e., $||r(t+1) - r(t)||_\infty$) would be at least $1/2$ with probability at least $1/2$. Therefore, the total variation is $O(T)$.
- This suggests that our AR-based environment is more volatile than settings characterized by sublinear total variation, such as the one studied in [Besbes et al. 2014]. Our established lower bound also aligns with their lower bound result, which states that if the total variation is $O(T)$, our best achievable total regret would be $O(T)$ (see Line 184-187 of Section 3). Hence, studying per-round regret and presenting upper bounds based on AR parameters $\alpha$ and $\sigma$ would provide better insights into an algorithm’s performance in our rapidly changing environment.
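The linear scaling in the simplified example above ($\alpha = 1/2$, $R = 1$, $\epsilon_i(t) = \pm 1$ with equal probability) is easy to confirm numerically; per-round variation concentrates near a positive constant, so the total variation grows as $\Theta(T)$. A small simulation sketch:

```python
import random

def total_variation(T, alpha=0.5, seed=0):
    """Total variation sum_t |r(t+1) - r(t)| for one arm of
    r(t+1) = alpha * r(t) + alpha * eps(t), with eps(t) uniform on {-1, +1}.
    With alpha = 1/2 and r(0) = 0, the path stays in [-1, 1] automatically,
    so no explicit truncation step is needed here."""
    rng = random.Random(seed)
    r, tv = 0.0, 0.0
    for _ in range(T):
        r_next = alpha * r + alpha * rng.choice([-1.0, 1.0])
        tv += abs(r_next - r)
        r = r_next
    return tv
```

Averaging over the noise, the per-round variation $\tfrac{1}{2}|\epsilon(t) - r(t)|$ is about $1/2$, so doubling $T$ roughly doubles the total variation.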
**(2) Per-round regret is bounded.**
* Given the bound $R$ on our reward, we agree with the reviewer that a more accurate statement of Theorem 5.1 would be: **any algorithm** (including the naive algorithm) would incur per-round regret of at most $\mathbf{O(\min(\sqrt{\log (1/\alpha \sigma)+\log k} \cdot \frac{\alpha \sigma}{\sqrt{1-\alpha^2}}, 2R))}$. We would update this in the revised paper to add clarity.
* The main purpose of presenting Theorem 5.1 is to show that under the setting where $\alpha$ is small (i.e., $\alpha < \bar{\alpha}$), the upper bound attainable by any algorithm (even the naive algorithm) stays close to the lower bound (refer to Figure 1). As Figure 1 also shows, when $\alpha \rightarrow 1$, the upper/lower bounds deviate from each other, highlighting the need to design an algorithm for the case of large $\alpha$ as we did in the paper (see Theorem 5.2).
* We’d like to further add that the lower bound presented in Section 3 scales with the AR parameters $\alpha$ and $\sigma$. If the constant $R$ is large enough, our lower bound would be quite far away from the bound of $2R$. Hence, assuming $R$ is large enough, our goal is to devise an algorithm whose per-round regret upper bound can be characterized in terms of the AR parameters ($\alpha, \sigma$), and ensure that its upper bound stays close to our lower bound.
* In our setting, in the absence of such boundedness, the variance of expected rewards under the stationary distribution would go to infinity as $\alpha$ goes to 1, and it then becomes impossible to design any useful algorithm for such a volatile environment. Further, note that the boundedness of rewards is a common assumption in the MAB literature (e.g., Lai & Robbins 1985, Auer et al. 2002, Besbes et al. 2014), and hold in most real-world applications such as demand forecasting and ad CTR prediction.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you for the explanation. I still have one question. If the expected reward is bounded by [-R, R], intuitively, this R should directly impact the regret, or, in your context, the steady-state error, such that, this bound should scale with R, as implied by the series of works mentioned in your rebuttal (Lai & Robbins 1985, Auer et al. 2002, Besbes et al. 2014). However, the O(min(..., 2R)) proposed by the authors in the rebuttal still doesn't seem to capture the influence of R on the problem.
---
Reply to Comment 1.1.1:
Title: Rebuttal by Authors
Comment: Thank you for your question! Let us provide further clarifications below.
- The upper bound in Thm 5.1 can be slightly updated to reflect full dependency on $R$, as $O(\min(\sqrt{\log(1/\alpha \sigma)+\log k} \cdot \frac{\alpha\sigma}{\sqrt{1-\alpha^2}} + \mathbf{R \alpha \sigma} , 2R))$. Please refer to Appendix G for the proof of this bound.
- We initially left out the term $R \alpha \sigma$ in our Big-O notation. This was because, with $R$ being a constant, this term has the same dependency on $\alpha$ and $\sigma$ as the main term, $\sqrt{\log (1/\alpha \sigma)+\log k} \cdot \frac{\alpha \sigma}{\sqrt{1-\alpha^2}}$. The main term, however, has an added dependency on $\sqrt{\log (1/\alpha \sigma)+\log k}$, which makes it the dominating term in the Big-O notation. We can update the upper bound in Thm 5.1 to improve clarity. | Summary: This paper studies the problem of non-stationary bandit learning in bandits where rewards have auto-regressive temporal dependency. More specifically, this paper considers bandits where the evolution of the expected mean reward of each arm undergoes an independent AR-1 process truncated to a bounded interval. All arms share the same AR parameter $\alpha$. The authors propose an algorithm Alternating and Restarting algorithm for dynamic AR bandits (AR2), which takes the AR parameter $\alpha$, a stochastic rate of change $\sigma$, epoch size $\Delta_{\mathrm{ep}}$ and a parameter $c_0$ as input. The algorithm “addresses the exploration-exploitation tradeoff by alternating between exploiting the superior arm and exploring the triggered arm within each epoch,” and it “handles the tradeoff between remembering and forgetting” via restarting. To illustrate the efficacy of the algorithm, an upper bound on the regret of AR2 is established, and is compared with a lower bound on the regret. Numerical experiments and a real-world case study on tourism have also been conducted.
Strengths: The paper is well-organized and the exposition is clear and easy to follow. The use of AR-1 model in characterizing non-stationary dynamics is also well-motivated.
Weaknesses: The main weakness of the paper lies in the insufficient justification of the efficacy of the algorithm AR2. Specifically:
1. The paper suggests that a key mechanism of the algorithm is “an alternation mechanism adept at leveraging temporal dependencies to dynamically balance exploration with exploitation,” without discussing the intuition behind this counter-intuitive argument.
- One question I have is that it seems that via alternation, the algorithm explores and exploits at the same intensity. Shouldn’t the intensities change according to the AR-1 parameter $\alpha$ and the stochastic rate of change $\sigma$?
2. The regret upper bound established by Theorem 5.2 doesn’t sufficiently justify that AR2 performs well, due to the somewhat restrictive assumptions.
- The bound does not consider the range $\alpha \in [0, \overline{\alpha})$. Does AR2 perform well in this regime? In addition, I understand that some other naive algorithm performs well in this regime, but Theorem 5.1 cannot fully justify that because the bound on average regret explodes as the number of arms $k \rightarrow +\infty$ (the bound becomes vacuous when $k$ is large enough because rewards are bounded). Also, Theorem 5.1 does not apply to AR2.
- The assumption of $k \leq \mathcal{K}(\alpha)$ suggests that Theorem 5.2 can only be applied when the number of arms $k$ is small. For example, when $\alpha = \overline{\alpha} = 0.5$, the number of arms $\mathcal{K}(\alpha) = 2$: the theorem only applies to bandits with two arms.
- Moreover, the bound on average regret grows in $k^3$, which can be very large when the number of arms $k$ is moderately large.
3. I also have concerns on the numerical experiments.
- One concern is that although the paper suggests that a real-world case study has been completed, experiments were not conducted on the real data, but rather on synthetic data generated by an AR-4 model fitted to the data.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: My concerns and questions are raised in the Weakness section. In summary:
1. It would be great if the authors could explain the intuition behind why the alternation mechanism can effectively balance exploration with exploitation, although it seems that the extent to which the algorithm explores is the same as the extent to which the algorithm exploits.
2. Can the authors provide justifications of how well the algorithm AR2 performs when each of the assumptions $\alpha \in [\overline{\alpha}, 1)$, and $k \leq \mathcal{K}(\alpha)$ in Theorem 5.2, is violated, and when $k$ is large?
3. Can the authors explain why experiments are run against synthetic data instead of real data, when real data is provided in the tourism case study?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback! We’d like to address each of your questions below.
**(1) Regarding the alternation mechanism,**
* We'd like to first clarify that we do not always explore and exploit at the same intensity. At the exploration step, we only pull a triggered arm during odd rounds if the triggered set $\mathcal{T}$ is non-empty (see Alg. 1). Our triggering condition (Eq. 2), defined by AR parameters ($\alpha, \sigma$), naturally determines the rate of exploration.
* Unlike stochastic bandits, the dynamic environment mandates **continuous exploration** alongside continuous exploitation because the best arm keeps changing. Neglecting any arm for too long reduces our confidence in its potential. This makes balancing exploration and exploitation especially challenging (as discussed in Sec. 1), since the rate of exploration needs to be adjusted based on the amount of changes in our environment. Our alternation mechanism is tailored for this task, with the help of the superior arms and the triggered set.
To see that, consider the following two scenarios:
- In slower-changing environments (e.g., with small $\sigma$), the triggered set may not always include arms needing exploration, allowing focused exploitation of the superior arm. For instance, if there are only two arms whose rewards change slowly, one might dominate for extended periods before the other arm meets the triggering condition (Eq. 2). In this case, our alternation mechanism makes exploitation its primary strategy, only exploring occasionally.
- In fast-changing environments where the best arm shifts rapidly, the triggered set is likely to always contain some arms worth exploring. Here, the rate of exploration can be as high as the rate of exploitation to ensure that we keep track of all arms while not over-exploring.
We will add the above discussion to Section 4 to add more clarity and justification.
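The adaptive exploration rate described in the two scenarios above can be illustrated with a toy simulation (our own deliberately simplified sketch: the triggering condition of Eq. 2 is replaced by a fixed schedule of trigger events, which is not the paper's actual rule):

```python
def alternate_rounds(trigger_times, horizon):
    """Simplified alternation: on odd rounds, pull a triggered arm if the
    triggered set is non-empty; otherwise (and on even rounds) exploit the
    superior arm. Returns the fraction of rounds spent exploring."""
    pending = 0    # size of the (simplified) triggered set
    explored = 0
    for t in range(horizon):
        if t in trigger_times:
            pending += 1
        if t % 2 == 1 and pending > 0:  # odd round with a non-empty triggered set
            pending -= 1
            explored += 1               # exploration step
        # otherwise: exploitation of the current superior arm
    return explored / horizon

# Slow environment: arms rarely meet the triggering condition -> mostly exploit
slow = alternate_rounds(set(range(0, 1000, 100)), 1000)   # -> 0.01
# Fast environment: some arm is triggered every round -> explore half the time
fast = alternate_rounds(set(range(1000)), 1000)           # -> 0.5
print(slow, fast)
```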
**(2a) Regarding the regime of small $\alpha$ and Thm. 5.1,**
* We’d like to clarify that a more accurate statement of Thm. 5.1 should be that **any algorithm** (including the naive algorithm and our algorithm AR2) would incur per-round regret at most $\mathbf{O(\min(\sqrt{\log (1/\alpha\sigma)+\log k}\cdot\frac{\alpha\sigma}{\sqrt{1-\alpha^2}},2R))}$. The proof of Thm. 5.1 does not use any property unique to the naive algorithm. Our initial statement is meant to emphasize that in the small $\alpha$ regime, even the regret upper bound of a rudimentary algorithm matches the lower bound (see Fig. 1).
* We remark that it is reasonable for the bound to scale with $k$ here. Even in vanilla stochastic bandits, the per-round regret scales with $\sqrt{k/T}$, inherently increasing in $k$.
* Given that Thm. 5.1 suggests theoretically sound performance for any algorithm in the small-$\alpha$ regime, we therefore focus on analyzing the performance of AR2 in the large-$\alpha$ regime in Thm. 5.2.
* Moreover, our numerical studies in Appendix A show the superior performance of AR2 in both small-$\alpha$ and large-$\alpha$ regimes compared to various benchmarks.
We will revise the statement of Thm. 5.1 and add clarity in our revised paper.
**(2b/2c) Regarding the number of arms $k$,**
Thank you for your question on (i) the bound $k \leq \mathcal{K}(\alpha)$ and (ii) the dependency on $k$ in Thm. 5.2. We will address both points below.
* The bound $k \leq \mathcal{K}(\alpha)$ is mainly required for the rigor of theoretical analysis. Both this bound $\mathcal{K}(\alpha)$ and the $k^3$ dependency in our upper bound come from the loose upper bound we used for the number of triggered arms (in Lemma H.3, we used the fact that at any given round, there are at most $k-1$ triggered arms). See Line 252-259 of Sec. 5 for more details.
* Theoretically, if at any given round, the number of triggered arms is $O(1)$, the bound $k \leq \mathcal{K}(\alpha)$ would no longer be needed, and our upper bound can be tightened to $O(c_0^2\alpha^2 \sigma^2k\log(c_0\alpha\sigma))$, which exactly matches the lower bound up to logarithmic factors.
* Our numerical studies in Appendix A also reveal that (i) AR2 maintains its competitive performance even when $k$ exceeds $\mathcal{K}(\alpha)$; and (ii) our per-round regret grows modestly with $k$, despite the theoretical $k^3$ dependency. These suggest that the assumption/dependency on $k$ for our upper bound are artifacts of our analysis, rather than an intrinsic property of our algorithm.
For instance, when $\alpha = 0.9$, we’d have $\mathcal{K}(\alpha) = 10.4$, but AR2 remains competitive even when $k=20$. Also, as the number of arms doubles from 10 to 20, the regret of AR2 increases gracefully by 20%, significantly milder than the $k^3$ dependency. See Appendix A for more details.
**(3) Regarding the real-world case study,**
* Using raw data from [36] in our case study wasn’t feasible for two reasons.
- Their data only consists of quarterly arrivals during 1975-1989, amounting to merely 60 data points in total. For our algorithm's learning process to be meaningful, a substantially larger dataset is crucial.
- Our bandit setup seeks to establish competing arms (demand for vacation packages) correlated with the same exogenous variables (the tourism demand). Yet, the provided dataset comprises only a single time series without accompanying data on the demand for vacation packages.
* To address these, we used the AR parameters from [36] to model demand for different vacation packages (arms). The authors of [36] fitted the logarithms of quarterly tourist arrivals to an AR-4 model, validating each lag with computed t-statistics. This model allowed us to craft time series for competing arms over a multitude of rounds.
* Finally, we’d also like to highlight that the main purpose of our case study is to demonstrate the adaptability of AR2 to more complicated time series with real-world characteristics (AR-p processes with trend). This has indeed been shown in our case study.
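As a sketch of how such AR-p arms can be simulated, the following generates competing time series from an AR-4 recursion (the coefficients below are made up for illustration; the actual AR-4 fit from [36] is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar(phi, noise_std, n, burn_in=500):
    """Simulate x[t] = phi[0]*x[t-1] + ... + phi[p-1]*x[t-p] + Gaussian noise."""
    p = len(phi)
    x = np.zeros(n + burn_in)
    for t in range(p, n + burn_in):
        x[t] = np.dot(phi, x[t - p:t][::-1]) + rng.normal(0.0, noise_std)
    return x[burn_in:]  # discard the burn-in transient

# Hypothetical AR-4 coefficients (sum of magnitudes < 1, so the process is stationary)
phi = [0.5, 0.1, 0.05, 0.2]
arms = [simulate_ar(phi, noise_std=0.3, n=5000) for _ in range(3)]
print(len(arms), arms[0].shape)
```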
---
Rebuttal Comment 1.1:
Title: Primary Concerns Remain Unresolved After Reading the Author Response
Comment: Thank you for your detailed response!
Although some of my questions are addressed, my primary concerns remain unresolved.
2 (b/c) From your response, $k \leq \mathcal{K}(\alpha)$ is indeed required as an assumption, and you agree that the theoretical results give a $k^3$ dependency, where $k$ is the number of arms. As I suggested in my review, the assumption is strong, and the $k^3$ dependence on $k$ is worse than other regret bounds established in the literature.
2 (a) Regarding small $\alpha$ regime.
* Since the paper lets $\epsilon_i(t) \sim \mathcal{N}(0, \sigma)$, in the AR-p setting, the long-run average reward of every arm is zero. This setting is strange because it doesn't encompass stationary environments as special cases, for example, stationary environments where different arms have different means.
* Moreover, it is the fact that the long-run average reward of every arm is zero that makes Theorem 5.1 applicable to all algorithms. Indeed, Theorem 5.1 does not even apply to all stationary bandits and all algorithms: the bound converges to $0$ as $\alpha \rightarrow 0$, but an algorithm that always aims to pull the worst arm cannot incur such small regret.
* When Theorem 5.1 cannot be applied to AR2, it is concerning how the algorithm performs in this regime.
3 It is claimed in both the abstract and the introduction that this paper conducts a real-world case study on tourism demand prediction, yet your response suggested that you could not use the raw data in this dataset ....
* Why not use the raw data in other datasets presented in other bandit learning papers? For example, the one used in Chapelle and Li 2011? Or, Zhou et al. 2020 (https://arxiv.org/abs/1911.04462)?
* Since your algorithm is specifically designed for AR-p bandits, it is unfair to claim that you conduct a real-world case study, while comparing your algorithm with other algorithms designed for general non-stationary bandits on AR-p data (which are generated from AR-p simulators fitted from real data) instead of real data.
1 Could you explain why the algorithm AR2 alternates between 1 exploration period and 1 exploitation period, instead of e.g. have 1 exploration period but 2 exploitation periods? What's the intuition behind this specific alternation pattern?
Thus, I will keep my score as 3.
---
Reply to Comment 1.1.1:
Title: Rebuttal by Authors
Comment: **(2b/c) Regarding the number of arms $k$,**
- Our numerical studies (Appendix A) show that the assumption in Thm 5.2 is **not** necessary and can be relaxed with a tighter analysis. These studies also show that the regret increases gracefully with $k$. The assumption and regret dependency on $k$ are **only** an artifact of our analysis (in particular Lemma H.3) and are not inherent to our algorithm.
- We’d like to ask the referee for their understanding. Our research is in fact the first to investigate AR-based non-stationary bandits under a strong dynamic benchmark, aiming to characterize per-round regret w.r.t. AR parameters. We've devised an algorithm adaptable for different AR-based processes, which shows superior performance against all non-stationary benchmarks, including a follow-up work [39] that AC recently mentioned.
**(2a) Regarding the small-$\alpha$ regime,**
- **Regarding Thm 5.1,**
- We believe that there is a **misunderstanding**. The referee thinks that the reason Thm 5.1 holds is because the long-run average reward goes to zero. This is not true. The reason that Thm 5.1 holds is because for small $\alpha$, the steady-state distributions of expected rewards for all arms cluster around zero. These two statements are not equivalent.
- To see that, consider a case unrelated to our setting, where each arm's reward switches between $\pm1$. Here, the long-run average reward of the arm is 0. But, as the referee suggested, an algorithm that keeps pulling the worst arm would incur a (dynamic) loss of 2 at every single round.
- In the small-$\alpha$ regime, the expected rewards for all arms cluster around zero under the steady-state. This means that at any given round, the gap between the best and worst arms is extremely small, so even an algorithm that always pulls the worst arm won't incur significant (dynamic) loss.
- **Regarding arms having the same steady-state mean of expected rewards,**
- Our setting is more challenging than those with distinct steady-state means of expected rewards. When arms have different steady-state means, there usually exists a dominating arm, and hence an algorithm designed for static benchmark is expected to work well. In our case with equal steady-state means of expected rewards, to achieve a good regret against the dynamic benchmark, we need to closely monitor the expected rewards of each arm over time to select the high-reward arm in every round.
- We further note that Thm 5.1 can be extended to a setting where the arms have the same, nonzero steady-state mean of expected rewards (e.g., consider $r(t+1)=\alpha r(t)+(1-\alpha)c+\epsilon(t)$, which yields a steady-state distribution clustered around $c$ for small $\alpha$). As stated earlier, this is a challenging setting as there does not exist a dominating arm.
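A quick simulation of the recursion mentioned above (our own sketch, taking the noise as $\mathcal{N}(0, \texttt{noise\_std}^2)$) confirms that for small $\alpha$ the process concentrates around the nonzero mean $c$:

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_with_mean(alpha, c, noise_std, n):
    """r(t+1) = alpha * r(t) + (1 - alpha) * c + eps(t), eps ~ N(0, noise_std^2)."""
    r = np.empty(n)
    r[0] = c
    for t in range(n - 1):
        r[t + 1] = alpha * r[t] + (1 - alpha) * c + rng.normal(0.0, noise_std)
    return r

r = ar1_with_mean(alpha=0.2, c=3.0, noise_std=0.1, n=20000)
# Steady-state mean is c = 3; steady-state std is noise_std / sqrt(1 - alpha^2) ~ 0.102,
# so for small alpha the distribution clusters tightly around c.
print(round(float(r.mean()), 2), round(float(r.std()), 2))
```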
**(3) Regarding the case study,**
- Upon a quick look at the papers you suggested, [Zhou et al. 2020] used data from UCI Machine Learning Repository, which are mainly classification datasets, so it is not clear if there exists AR-based temporal structure. The display advertising data in [Chapelle & Li 2011] can be potentially useful; however, the data is not publicly available, as claimed by the authors of “Estimating rates of rare events with multiple hierarchies through scalable log-linear models”, who used the same data.
- We've explained the need for data simulation due to limitations in the available data. We’d like to further add that using AR-p simulations fitted from real-world data ensures a controlled experimental environment, enabling us to rigorously assess the benefits of leveraging knowledge of the temporal structure.
- **Regarding benchmark algorithms in our case study/numerical studies**, until recently, there did not exist any algorithm designed specifically even for AR-1 process. We've already taken one step further and modified the UCB algorithm based on the AR process (see mod-UCB, Appendix C). See also the predictive sampling (PS) algorithm, which we tailored for our AR-1 setup, that we simulated in our response to the AC.
This necessitates comparisons with algorithms for general non-stationary MAB, under an AR-based setup. Our choice of the dynamic benchmarks includes RExp3 and sliding window UCB/TS (see Appendix A), similar to related follow-up work [39].
**(1) Regarding the alternation mechanism,**
- Our current alternation mechanism is a design choice, with both theoretical and empirical evidence confirming its efficacy.
- Adjusting the exploration/exploitation pattern (e.g., 2 rounds of exploitation and 1 round of exploration) would result in the same theoretical upper bounds and we'll comment on this.
- The true value of our alternation mechanism lies in its adaptability to various time-varying environments through the triggering condition. In slowly-changing environments, exploitation naturally increases, while in fast-changing environments, the mechanism increases the amount of exploration. | Summary: This paper studies a non-stationary bandits problem where the reward rate of each arm evolves according to the AR(1) model, i.e., shrinks by a known multiplicative factor and adds an indep noise. They proposed an algo that strikes a balance between (i) exploration v. exploitation and (ii) remembering v. forgetting. As their main results, they showed that this algo achieves nearly optimal regret. Moreover, they presented a real world case study.
Strengths: - the model is novel, neat and fundamental.
- provided a nearly optimal regret
- presented a high quality real world case study
- the manuscript is well-written
Weaknesses: - Since the main term in the regret is sigma, it is better to specify the function g in the lower bound (Thm 3.1), especially how it depends on sigma. After skimming the appendix, I am still not 100% sure whether the claimed UB and LB really match.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - To ensure boundedness of the mean reward, you truncated the AR process. Why is this wlog?
- this problem is reminiscent of the Slivkins Upfal COLT'08 paper - that paper also considered stochastically evolving reward rates, truncated at the boundary, and their results mainly depends on \sigma. Can you comment on the connections between these two papers? E.g. does the S-U paper implies any preliminary results for your problem?
- Can this problem be approximately viewed as RL? Specifically, assuming alpha is a constant, then the history older than log # rounds has little impact on the current reward rate. Can we encode the log n - step history as the state? (If so, the transition matrix is also known) Can we immediately obtain a regret bound using existing RL results?
- the theoretical part considers AR(1) but the case study considers AR(4). Is this difference essential?
other comments:
line 145: "2exogenous" - is this a typo?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: (typesetting issue) when I click on a reference, the pdf file does not jump to the reference page
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback! We’d like to address your questions below. We will incorporate the following discussions and fix the typo in our revised paper.
**(1) Dependency of function $g$, upper and lower bounds on $\sigma$.**
* Our function $g(k, \alpha, \sigma)$ represents the probability that two best arms are within $\alpha\sigma$ of each other under the stationary distribution. To see how $g$ evolves in terms of $\sigma$, we can consider the scenarios of small $\alpha$ and large $\alpha$ separately.
As depicted in Figure 3 in Appendix D, when $\alpha$ is small, the stationary distribution resembles a normal distribution with standard deviation proportional to $\alpha\sigma$; while when $\alpha$ is close to one, the stationary distribution resembles a uniform distribution. Hence, when $\alpha$ is small, we expect $g$ to be roughly constant in terms of $\sigma$, while when $\alpha$ is close to one, $g$ should be roughly linear in terms of $\sigma$.
Our statement above is confirmed in _Figure 5 in the PDF of the global response_, where we numerically computed the values of $g$ for different $\sigma$ under the two scenarios ($\alpha = 0.4$ and $\alpha = 0.9$) respectively.
* Having seen the evolution of $g$, we can see that our upper bound (UB) and lower bound (LB) indeed match in terms of $\sigma$.
- For the regime with small $\alpha$, we illustrate the evolution of our UB and LB in _Figure 6 in the PDF of the global response_. This plot reveals that our UB and LB have the same trend of increase in terms of $\sigma$ in the small-$\alpha$ regime, which complements our result in Figure 1 of Section 5 (that shows our UB and LB match in terms of $\alpha$ in the same regime).
- For the regime with large $\alpha$, we have already illustrated the evolution of UB, LB and the per-round regret of AR2 in Figure 4(b) of Appendix E, which again shows that the UB and LB follow the same trend of increase with respect to $\sigma$.
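As an illustration of the claimed behavior of $g$, here is a Monte Carlo sketch under our own simplifying assumption, based on the description above, that each arm's stationary reward is $\mathcal{N}(0, s^2)$ with $s = \alpha\sigma/\sqrt{1-\alpha^2}$ (truncation at $\pm R$ is ignored, so this is not the paper's exact stationary law):

```python
import numpy as np

rng = np.random.default_rng(2)

def g_estimate(k, alpha, sigma, n_trials=200_000):
    """Monte Carlo estimate of P(gap between the two best arms <= alpha*sigma),
    approximating each arm's stationary reward as N(0, s^2)."""
    s = alpha * sigma / np.sqrt(1.0 - alpha ** 2)
    samples = rng.normal(0.0, s, size=(n_trials, k))
    top2 = np.sort(samples, axis=1)[:, -2:]
    return float(np.mean(top2[:, 1] - top2[:, 0] <= alpha * sigma))

# Under this Gaussian approximation the top-2 gap scales with s while the
# threshold is alpha*sigma, so the estimate is flat in sigma, consistent with
# the "roughly constant" claim for the small-alpha regime.
print([round(g_estimate(5, 0.4, s), 3) for s in (0.1, 0.5, 1.0)])
```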
**(2) Boundedness of rewards and our truncation approach.**
* Here, we adopt a truncating approach at the boundary $[-R, R]$ mainly because it is a simple and natural method to ensure boundedness of the expected rewards. However, it's important to note that this truncating boundary is merely one of the many we're equipped to handle. Our main results would remain valid across various boundary conditions such as reflecting or absorbing boundaries, with slight changes in the PDF/PMF of the stationary distribution (see Appendix D for our discussion).
* In our setting, in the absence of such boundedness, the variance of expected rewards under the stationary distribution would go to infinity as $\alpha$ goes to 1, and it then becomes impossible to design any useful algorithm for such a volatile environment. Further, note that the boundedness of rewards is a common assumption in the MAB literature (e.g., Lai & Robbins 1985, Auer et al. 2002, Besbes et al. 2014), and holds in most real-world applications such as demand forecasting and ad CTR prediction.
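A small simulation contrasts the truncated process with the untruncated stationary variance (our own sketch; the noise std and $R$ are arbitrary, and the formula $\sigma^2/(1-\alpha^2)$ is the standard AR-1 stationary variance for noise std $\sigma$):

```python
import numpy as np

rng = np.random.default_rng(3)

def truncated_ar1(alpha, noise_std, R, n):
    """AR-1 expected-reward process clipped to [-R, R] at every step."""
    r = np.empty(n)
    r[0] = 0.0
    for t in range(n - 1):
        r[t + 1] = np.clip(alpha * r[t] + rng.normal(0.0, noise_std), -R, R)
    return r

noise_std, R = 0.5, 1.0
for alpha in (0.5, 0.9, 0.99):
    emp_var = float(truncated_ar1(alpha, noise_std, R, 50_000).var())
    untrunc_var = noise_std ** 2 / (1 - alpha ** 2)  # diverges as alpha -> 1
    # the truncated process stays bounded while the untruncated variance blows up
    print(alpha, round(emp_var, 2), round(untrunc_var, 2))
```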
**(3) Comparison with [Slivkins et al. 2008].**
* [Slivkins et al. 2008] mainly studies non-stationary bandits where the rewards evolve as Brownian processes. Their setup can be viewed as a special case of the AR model where $\alpha = 1$ (see our comment in Section 1.2). In fact, if we simply take $\alpha = 1$, our method and results are also valid for the Brownian process, and our upper/lower bounds would share the same dependency on $\sigma$ as their upper/lower bounds.
* However, our general AR model studies a broader spectrum of environments that accommodates $\alpha \in (0,1)$. As such, the algorithm designed for Brownian bandits is not directly applicable for general AR bandits. One key challenge that arises from the AR model is that the correlation between past and future information now decays exponentially fast. Hence, the design of our algorithm is different from theirs in two important aspects:
- While the algorithm of [Slivkins et al. 2008] can seamlessly leverage past observations from any arm to estimate its current state, we have to factor in the exponential decay of past values we've observed. This impacts our definition of the estimated reward, triggering condition, and the rate of the alternation mechanism.
- Our approach also mandates periodic restarts to judiciously discard outdated information. This becomes crucial for the general AR setup as any older data can swiftly become misleading.
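The exponential decay of past information mentioned above can be checked numerically: for a stationary AR-1 process, $\mathrm{Corr}(r(t), r(t+\Delta)) = \alpha^{\Delta}$. A minimal sketch (our own illustration, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

def ar1(alpha, noise_std, n):
    """Plain (untruncated) AR-1 process starting at 0."""
    r = np.empty(n)
    r[0] = 0.0
    for t in range(n - 1):
        r[t + 1] = alpha * r[t] + rng.normal(0.0, noise_std)
    return r

x = ar1(0.8, 1.0, 200_000)
for lag in (1, 5, 20):
    emp = float(np.corrcoef(x[:-lag], x[lag:])[0, 1])
    print(lag, round(emp, 3), round(0.8 ** lag, 3))  # empirical vs alpha^lag
```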
**(4) Viewing the problem as an RL problem.**
* Traditional RL methods (e.g., Q-learning) operate best with limited, finite state space. In our case, following your suggestion, the state would be a $\log(T)$-dimensional vector, where each coordinate can be any real number between $[-R, R]$. That is, the size of the state space is infinite, and hence such an algorithm cannot be implemented in polynomial time. Discretizing the state space could be a potential alternative, but the impact of such discretization on regret remains unclear given how the AR process evolves.
Nonetheless, we really appreciate your suggestion and believe that it is certainly a promising area for further exploration. We will mention it in our future works section in the revised paper.
**(5) Our theory and the case study.**
* We would like to remark that our theory and case study serve different purposes. We develop our theory around AR-1 processes mainly to shed light on the design of our two main mechanisms (alternation and restarting) and show the near-optimality of our algorithm with theoretical rigor. On the other hand, our case study highlights the adaptability of AR2 to more complicated time series observed in the real world (AR-$p$ processes with trend), underscoring its robust performance. Our hope is that the combined insights from our theoretical and empirical findings would inspire more works that explore bandits in a time-varying environment.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude to all reviewers for your valuable and insightful feedback! We have carefully addressed each reviewer’s comments and questions below. For Figures 5, 6 and Table 3 crafted during the rebuttal period, please kindly refer to the attached PDF in this global response. We will incorporate all clarifications and discussions in our response into the revised version of the paper.
Our work is, to our knowledge, the first that studies non-stationary bandits with reward distributions governed by time series that encapsulate real-world characteristics, specifically the AR process. Our aim is to identify the major challenges inherent to such rapidly changing environments commonly observed in practice, and propose effective mechanisms for addressing them. We hope that the high-level ideas shown in this paper would provide insights for any future research that explores bandits with other types of temporal structures.
Once again, thank you for your time and thoughtful contributions to improving our work!
Pdf: /pdf/8ba037fd346ca8fb5eb837c5bcb3235a981df3e6.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Optimistic Active Exploration of Dynamical Systems | Accept (poster) | Summary: This paper addresses the problem of active exploration of a (Markovian) dynamical system with continuous states and actions. In the absence of any cost function, the proposed objective is the maximization of the one-step information gain on the dynamics model. The paper presents an algorithm, called OpAx, to maximize the introduced objective by alternating optimistic planning with the current estimate of the dynamics model and data collection with the resulting policy to improve the model estimate. Then, the paper provides a convergence analysis of OpAx, which provably converge to the true model asymptotically under various assumptions. Finally, the OpAx algorithm is evaluated against some relevant baselines in continuous control tasks.
Strengths: - The paper tackles an active exploration problem for system identification that is of general interest to both the reinforcement learning and optimal control communities;
- The algorithm presented in the paper is simple and intuitively sound;
- The paper combines theoretical justification with empirical validation in challenging domains;
- The paper is well-presented and looks rigorous in the theoretical statements.
Weaknesses: - The paper is mainly motivated as a tool for system identification that can be used in preparation to solve RL or control tasks, but this motivation looks somewhat weak for the RL side;
- The experimental results are not terrible, but I can hardly see improvements over simpler baselines, such as one using planning with the average model instead of optimism;
- The paper neglects several related works in RL, especially reward-free RL and active model estimation.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: This paper looks like a sound and neat work in system identification through active exploration and optimism. I do not have major concerns about the quality of the paper per se. However, judging from an RL perspective, this contribution looks two/three years late. Several recent works have demonstrated how to learn a model of the transition dynamics that is just good enough for planning by taking a polynomial number of samples from the process, which goes under the name of reward-free exploration. Instead, this paper takes the hardest route of trying to reduce the epistemic uncertainty "uniformly" over the dynamics model, which is arguably far-fetched without strong reachability assumptions.
For this reason, I struggle to see how this paper contributes to the advancement of RL research, as it does not compare well with reward-free exploration in terms of theoretical guarantees, neither significantly improves over previous practical methods empirically.
However, this might be a useful contribution for the control community, and I am open to raise my score if the authors could show me that.
I report below some detailed comments.
**Dynamical Systems and MDPs**
The paper motivates the approach as a tool for learning the dynamics model to then solve RL tasks. However, the standard model for RL is the MDP, while the paper focuses on a specific type of Markovian process in which the transitions are given by a deterministic function plus Gaussian noise. Can the authors explain how this model relates to MDPs, and the relative expressive power of the two?
**Well Calibration Assumption**
Whereas Ass. 1 might be standard in the control literature, which I am not familiar with, it is kind of black-box from an RL perspective. It looks like it is saying that the problem is learnable, but I would like to know under which conditions on $f^*$ this is the case.
When is the Ass. 1 violated in practice? Does this implicitly assume strong reachability? Can the authors relate the Ass. 1,2,3 to the common structural assumptions on the MDP model?
**Convergence Results**
The theoretical results focus on showing that the maximum information gain shrinks with $N$, but the rate is actually exponential in $T$. This means that the sample complexity of learning an approximately correct model is also exponential?
**Experiments**
The OpAx algorithm does not seem to improve significantly over the Mean-AE baseline in the considered experiments. While OpAx arguably comes with a stronger theoretical justification, I would like to see some benefit in the experiments as well, or at least a theoretical result showing that planning with the mean model cannot be provably efficient under the considered assumptions.
**Related Works**
The paper neglects a recent yet considerable stream of works in reward-free RL, which seems to have overlapping motivation with their active exploration problem. This stream of works started with (Jin et al., Reward-free exploration for reinforcement learning, 2020) and counts dozens of subsequent results for tabular models, linear models, and general function approximation. Can the authors relate their contribution to the reward-free RL literature?
Another important work that seems to be missing is (Tarbouriech et al., Active model estimation in Markov decision processes, 2020), which looks also very close to the problem formulation of this paper, although it focuses on MDPs rather than dynamical systems with Gaussian noise.
**Minor**
- The paper refers to the Appendix A for the proof of Lemma 1, but I cannot find it;
- I would report confidence intervals instead of standard error in the plots.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper does not explicitly discuss the limitations of this work in terms of empirical results or theoretical guarantees.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and for sharing other relevant work. Based on their summary, we believe that the reviewer misunderstood our objective. In particular, our objective is *not the one step but $T$-step information gain on the model*. This is a crucial difference since it allows us to quantify the information gained over a whole rollout instead of just one transition.
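To make the distinction concrete, here is a minimal sketch (our own illustration, assuming a Gaussian predictive model with observation noise standard deviation $\sigma$; the function name and example numbers are hypothetical): under such a model, the information gain of a rollout accumulates the per-transition terms $\tfrac{1}{2}\log(1 + \sigma_t^2/\sigma^2)$, so a $T$-step objective credits everything learned along the whole trajectory, not just a single transition.

```python
import numpy as np

def rollout_information_gain(pred_stds, noise_std):
    """Information gain of a rollout under a Gaussian model:
    sum over t of 0.5 * log(1 + sigma_t^2 / noise_std^2).
    With a single predictive std this reduces to the one-step gain."""
    pred_stds = np.asarray(pred_stds, dtype=float)
    return 0.5 * np.sum(np.log1p(pred_stds**2 / noise_std**2))

# Gain of a single transition vs. an entire T=3 rollout (illustrative values).
one_step = rollout_information_gain([0.5], noise_std=0.1)
full_rollout = rollout_information_gain([0.5, 0.4, 0.3], noise_std=0.1)
```

The $T$-step objective is strictly larger whenever later transitions still carry predictive uncertainty, which is what makes it a meaningful planning target over whole rollouts.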
## Weaknesses:
*W1 Motivation from RL side*: There is a plethora of work on unsupervised RL that studies our problem setting [7-12]. Generally, systems such as robots are used to perform multiple tasks that are often not known a priori. Prior works such as [5] motivate the problem in the context of reward-free RL but with rewards coming from a known reward class. This setting, however, requires a finite covering number of the reward class and is not practical for our general problem.
Traditionally, for a given dynamics model, well-established model-based controllers are commonly used to optimize for any task. Therefore, having a generalist approach to efficient exploration and identification of the dynamics *uniformly* is typically preferred.
## Additional Questions:
*Q1 Dynamical Systems and MDPs*: We consider an MDP model with an infinite number of states and actions, unlike in [13, 14]. We use a classical textbook definition of a dynamical system [6], which includes a large class of systems in RL (e.g., the MuJoCo physics simulator).
*Q2 Well-Calibration Assumption*: Well calibration is a standard assumption in model-based RL with continuous state and action spaces. It has been used for both linear and nonlinear systems [1-4]. For the RKHS setting, this assumption is satisfied (cf., Lemma 2). For BNNs, proving well-calibration is an open problem (cf., lines 108-112). The well-calibration assumption has no connection to strong reachability. Intuitively, this assumption says that the algorithm is able to learn a mean function as well as a confidence region around it.
*Q3 Intuition of Assumptions 2 and 3*: Assumption 3 is a common assumption on the transition noise, which can further be relaxed to the even more general heteroscedastic noise case (cf., line 123). Finally, Assumption 2 makes smoothness assumptions on the dynamics that are common in the nonlinear systems literature [6].
*Q4 Convergence Results*: The rate of convergence depends on $N$, so the sample complexity (as a rate) for kernels such as RBF, linear, etc., is of polynomial order (cf., Theorem 2). Under our assumptions, the dependence on the horizon $T$ is exponential, similar to [3]. However, this does not affect the rate in $N$. As also argued in [3], obtaining a better dependence on the horizon $T$ requires stronger assumptions (one such assumption is shown in Appendix B, where we obtain polynomial dependence on $T$).
*Q5 Experiments*: The main contribution of our work is theoretical. In [3], the benefits of optimism are discussed in further detail. There it is shown that optimistic planning performs similarly to mean or PETS planning when the reward landscape is not sparse. Since this is the case for our intrinsic rewards, we empirically observe similar performance. Nonetheless, Mean and PETS, until now, have no theoretical guarantees for the general class of systems we consider. Deriving such guarantees for them is an open research problem.
*Q6 Related Works*: We thank the reviewer for sharing additional related works. We have included them and more works from reward-free RL in the revised paper. In our work, we consider a general class of dynamical systems for which we make very common/practical assumptions. As we mention in the general comment, theoretical results of our kind do not exist.
*Q7 Proof of Lemma 1*: Cf., line 516.
Having addressed all of the questions provided by the reviewer, and given the contributions of this paper, we kindly ask the reviewer to increase their evaluated score for our paper. We would be happy to answer any remaining questions or concerns.
## References:
[1] Chowdhury, Sayak Ray, and Aditya Gopalan. "On kernelized multi-armed bandits." ICML, 2017.
[2] Yasin Abbasi-Yadkori et al (2011). Regret bounds for the adaptive control of linear quadratic systems. COLT.
[3] Curi, S., et al (2020). "Efficient model-based reinforcement learning through optimistic policy search and planning." NeurIPS.
[4] Abeille, M., et al (2017). "Thompson sampling for linear-quadratic control problems." AISTATS.
[5] Chen, J. et al (2023). On the statistical efficiency of reward-free exploration in non-linear rl. NeurIPS.
[6] Khalil, H. K. (2015). Nonlinear control, volume 406. Pearson New York.
[7] Schmidhuber, J. (1991). A possibility for implementing curiosity and boredom in model-building neural controllers. In Proc. of the international conference on simulation of adaptive behavior: From animals to animats.
[8] Wagenmaker, A. et al (2020). Active learning for identification of linear dynamical systems. COLT.
[9] Mania, H. et al (2022). Active learning for nonlinear system identification with guarantees. JMLR.
[10] Pathak et al (2019). Self-supervised exploration via disagreement. ICML.
[11] Sekar, R. et al (2020). Planning to explore via self-supervised world models. ICML.
[12] Sancaktar, C., et al (2022). Curious exploration via structured world models yields zero-shot object manipulation. NeurIPS.
[13] Jin, C. et al (2020). Reward-free exploration for reinforcement learning. ICML.
[14] Tarbouriech, J. et al (2020). Active model estimation in markov decision processes. UAI.
---
Rebuttal Comment 1.1:
Title: Follow up on rebuttal
Comment: We hope we could address your concerns adequately. We would further like to emphasize how our work differs from reward-free exploration and the practical methods we consider for our evaluation.
1. Our work considers a very general class of dynamical systems in continuous state and action spaces. As we highlight in our rebuttal, our assumptions are common in both control and learning literature. Compared to reward-free RL (referring to the papers we discuss in the general comment and [3]), our assumptions are more practical and general.
2. We propose *a practical algorithm with theoretical guarantees* for this very general setting. We show that our algorithm works on several real-world systems of varying state and action space dimensions. Compared to other practical methods, our algorithm comes with first-of-its-kind theoretical guarantees. Such guarantees did not exist before for our setting. This is also discussed by other concurrent works ([1, Paragraph 1 in the Introduction] [2, Paragraph 2 in the Related Work]).
We have updated the paper to further highlight our contribution. We would much appreciate it if you could reconsider your assessment or respond with questions/suggestions so that we can improve the paper in this regard.
[1] Chakraborty, S. et al (2023). STEERING: Stein Information Directed Exploration for Model-Based Reinforcement Learning. arXiv preprint arXiv:2301.12038.
[2] Wagenmaker, A. et al (2023). Optimal Exploration for Model-Based RL in Nonlinear Systems." arXiv preprint arXiv:2306.09210.
[3] Chen, J. et al (2023). On the statistical efficiency of reward-free exploration in non-linear rl. NeurIPS.
---
Rebuttal Comment 1.2:
Title: Follow-up comments
Comment: I want to thank the authors for their thorough rebuttal.
While I have received clarifications on various aspects, I feel that my main concern about the motivation of the work remains somewhat unresolved. I would be happy to raise my score if the authors can convince me on that.
**1. Learning the model**
The rebuttal says *"traditionally, for a given dynamics model, well-established model-based controllers are commonly used to optimize for any task. Therefore, having a generalist approach to efficient exploration and identification of the dynamics uniformly is typically preferred."* If the goal is to solve RL tasks with the learned model, I struggle to understand why one should care about reducing the epistemic uncertainty uniformly over the state/action space. I think it is clear that reducing the epistemic uncertainty in a state that cannot be reached with meaningful probability by any policy is less important than reducing the epistemic uncertainty in a state that is easier to reach. Can the authors tell me what the flaw in this thought is? It is great to provide the first *"theoretical results of this kind"*, but I would like to understand why they matter for RL.
**2. Reward-free RL**
To develop on the previous comment, the reward-free RL objective instead focuses on learning the model "where it matters". In the global response, the authors are saying that their model assumption is strictly more general than what has been considered in reward-free RL. If this is the case, can they specialize their sample complexity result to have a direct comparison with those works (e.g., in linear MDPs, low-rank MDPs...)? From my understanding, the reward-free literature has also considered very general MDP classes, such as in (Qiu et al., On Reward-Free RL with Kernel and Neural Function Approximations: Single-Agent MDP and Markov Game, 2021) and (Chen et al., On the Statistical Efficiency of Reward-Free Exploration in Non-Linear RL, 2022), also mentioned by the authors. How does their model assumption generalize over them?
**3. Active model estimation**
Can the authors discuss how their objective relates to the one in (Tarbouriech et al., Active model estimation in Markov decision processes, 2020)? Do they consider a generalization of the latter objective for a larger class of MDPs, or are there other meaningful differences? (Note that the reviewers cannot see the revised version of the paper.)
**4. Theoretical ground for heuristic methods** In their rebuttal, the authors are providing a compelling argument for the motivation of this paper as a way to provide theoretical ground for heuristic methods that have been extensively used in practical RL (e.g., curiosity-driven approaches). Can the authors expand on this? To me, this looks like a stronger motivation, but it is also sidelined in the presentation.
I am sorry for the late reply, but hopefully there will still be sufficient time to conclude the discussion.
---
Reply to Comment 1.2.1:
Title: Response on the comment
Comment: We thank the reviewer for the discussion.
**1. Learning the model** 1) The main benefit of learning a general model is that once we have an accurate model of the system, we can use it with any model-based or model-free method to quickly obtain a policy for any new reward/downstream task interacting with the system. For example, consider a home robot that is asked to solve unknown tasks specified by the user (human). With OpAx, the robot can independently explore and learn the environment and quickly solve any task when dictated by the user without further exploration. 2) We would also like to emphasize that our algorithm **explores only the state-action pairs that are reachable** under the policy class $\Pi$. Accordingly, our algorithm only reduces uncertainty in the reachability set (cf., Theorem 2) and does not explore unreachable state-action pairs. We also make no assumptions about the downstream tasks and their policy classes. Nonetheless, if additional information on the tasks and their policy class is available, we can also restrict the policy class (and exploration) for OpAx to the set of policies (state-action pairs) that are relevant to the downstream tasks.
**2. Reward-free RL** 1) We assume dynamics of the form $x_{t+1} = f(x_t, u_t) + \epsilon_t$. For the case of linear MDPs or low-rank MDPs, a special structure on the function $f(x_t, u_t)$ is assumed. In the case of linear MDPs ($f(x_t, u_t) = \phi^T(x_t, u_t) \mu(x_t)$), if the unknown embedding $\mu$ satisfies Assumption 4 (RKHS setting), our sample complexity bound remains of the same order, with differences arising only in constant factors. 2) Indeed, Qiu et al., 2021 consider nonlinear approximations. However, they make a very different set of assumptions. In particular, they make continuity assumptions on the value function of all possible rewards (Assumption 3.2). This inherently implies a restricted reward class. Furthermore, the resulting algorithm requires a covering number of the resulting Q-function class (Section 3.3). Jin et al., 2022 also take a similar route with the covering number and make assumptions for all possible rewards (cf., Assumptions 1, 2 and Definition 1). Our set of assumptions is very different. First, we make **no assumptions about the rewards and their class**; for our setting, we typically have no knowledge of the underlying downstream tasks and their rewards. Second, our assumptions are made only on the dynamics model. Dynamical systems such as robots are driven by physical principles for which we typically have a better understanding. Thus, we believe making assumptions only on the dynamics is more natural and is often also done in the learning-based control community [1]. We added this discussion in the revised version.
**3. Active model estimation** Tarbouriech et al., 2020 design an algorithm that minimizes model estimation error (cf., Section 3.1). They operate in finite state-action MDPs. In this setting, the model estimation error is inversely proportional to the state-action visitation count (Proposition 2). **We are in a continuous state-action space.** In this setting, there are infinitely many states and actions, and we cannot use the state-action visitation count. For continuous states and actions, the equivalent of the model error is the model epistemic uncertainty. We use this as our objective to guide the exploration. In addition to the objective, we also plan in continuous spaces under unknown dynamics. We relate the regret of planning to the decay of the model epistemic uncertainty and give the sample complexity bound (proof of Theorem 1). Compared to the finite state-action MDP case, this requires a completely different set of tools. In summary, Tarbouriech et al., 2020 consider a discrete state-action space and we consider a continuous one. Therefore, the tools for both are different and neither generalizes the other.
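A rough side-by-side sketch of the two uncertainty notions contrasted here (purely illustrative; the function names, RBF kernel choice, and hyperparameters are our assumptions, not taken from either paper): in the tabular case, model error shrinks with the visitation count, while in continuous spaces the posterior standard deviation of, e.g., a GP model plays the analogous role.

```python
import numpy as np

def tabular_uncertainty(counts):
    """Tabular-style model-error proxy: shrinks as 1/sqrt(visitation count)."""
    return 1.0 / np.sqrt(np.maximum(counts, 1))

def gp_posterior_std(x_query, x_train, noise=0.1, lengthscale=0.5):
    """Continuous-space analogue: epistemic uncertainty of a 1D GP with
    an RBF kernel, low near observed inputs and high far from them."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / lengthscale**2)
    K = k(x_train, x_train) + noise**2 * np.eye(len(x_train))
    k_star = k(x_query, x_train)
    # Posterior variance: k(x*, x*) - k*^T (K + noise^2 I)^{-1} k*, with k(x*, x*) = 1.
    var = 1.0 - np.sum((k_star @ np.linalg.inv(K)) * k_star, axis=1)
    return np.sqrt(np.maximum(var, 0.0))
```

The GP posterior std is high at inputs far from all collected data and low near them, which is exactly the role the inverse visitation count plays in the tabular setting.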
**4. Theoretical ground for heuristic methods** We thank the reviewer for acknowledging that our argument is compelling. Indeed, a key contribution of our work is using ideas from experiment design and information theory to derive our intrinsic/exploration reward, which is the epistemic uncertainty. This reward is commonly used by practical curiosity-driven RL methods, and through our derivation we give theoretical grounds for such approaches. In addition to this, we show that combining this objective with optimism gives theoretical guarantees. We are the first to give such a result. Following the reviews, we updated the paper to emphasize this aspect of our contribution more.
We hope our response addresses the reviewer's concerns and are happy to answer further questions.
[1] Hewing, Lukas, et al. “Learning-based model predictive control: Toward safe learning in control.” Annual Review of Control, Robotics, and Autonomous Systems 3 (2020). | Summary: The paper presents an active exploration method (OPAX) for dynamics model learning. The method seeks to maximize information gain, while being optimistic about unknown dynamics with respect to the achievable information gain. Theoretical results establish a connection between information gain and model complexity, and a convergence guarantee for GP models. Experimental results compare OPAX to baselines in several control and manipulation domains.
Strengths: To my knowledge, the paper's principled approach to derive the planning objective in eqs. (6) and (7) from the optimal design perspective is novel. The paper is carefully written and easy to follow. The experiments are relevant and the selection of baselines seems fair. I believe that improving the data efficiency of learning dynamics models for zero-shot task generalization is a relevant research objective.
Weaknesses: The approach does not really outperform the relatively simple baseline methods used in the experiments. I don't think that's a huge problem, as I believe the main contribution of the paper is its theoretical part. Still, I think the paper could be improved by explaining better why the optimism does not lead to a significant difference in model performance (OPAX vs PETS-AE and Mean-AE).
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Can you identify experimental regimes in which the optimism has a significant impact on the performance of the resulting model? Even if such an experiment might be a little cherry-picked, I think it would still be useful for the reader.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: There is no detailed discussion of limitations anywhere in the paper. I think adding this would improve the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback and acknowledge that the reviewer correctly highlighted the strengths of our work. Below we have addressed the reviewer’s concerns.
## Weaknesses:
*W1 Outperforming baselines*: We are happy that the reviewer recognized that the main contributions of the paper are theoretical. In [1], the benefits of optimism are discussed in more detail. Particularly, they show that optimism helps in settings with sparse rewards, whereas for non-sparse rewards PETS and optimistic planning perform on par. Since our exploration objective is by definition not sparse (i.e., the reward is the epistemic uncertainty, which is non-zero almost everywhere), we observe that OPAX performs similarly to the baselines. We have added this discussion to the revised paper.
## Additional Questions:
*Q1 Experimental regimes where optimism helps*: As highlighted in our response to W1, optimism has an impact in settings with sparse rewards. Our proposed objective is not sparse by definition. However, we could penalize large actions in our objective, i.e., augment our reward to:
$$
r(s_t, a_t) = \log\left(1 + \frac{\sigma_n^2(s_t, a_t)}{\sigma^2}\right) - \lambda ||a_t||^2.
$$
In this setting, sparsity in the rewards can be induced by picking large values for $\lambda$. Practically, we penalize/avoid exerting large actions on the system. To give the reviewer an intuition, we compare OpAx to PETS-AE on the pendulum environment for a specific choice of $\lambda=5$ (cf., Figure 1 in the attached document). The figure shows that in this setting, optimism helps in solving the problem faster. We have also added this simple experiment to the appendix of the revised paper.
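As an illustrative sketch of the augmented reward above (our own code, not the authors' implementation; the function name and example values are hypothetical):

```python
import numpy as np

def augmented_reward(epistemic_std, noise_std, action, lam=5.0):
    """Exploration reward log(1 + sigma_n^2 / sigma^2), summed over state
    dimensions, minus the action penalty lam * ||a||^2. A large lam
    suppresses the reward for large actions, inducing reward sparsity."""
    info_term = np.sum(np.log1p(np.asarray(epistemic_std, dtype=float)**2
                                / noise_std**2))
    return info_term - lam * float(np.dot(action, action))
```

With `lam` large, the reward is negative unless the action is small or the epistemic uncertainty is high, which mimics the sparse-reward regime where optimism is argued to help.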
## Reference
[1] Curi, S., Berkenkamp, F., and Krause, A. (2020). Efficient model-based reinforcement learning through optimistic policy search and planning. NeurIPS. | Summary: The paper presents some insights into active exploration for model-based reinforcement learning. For certain kinds of environments, the authors show a convergence guarantee for model uncertainty. They augment their analysis with an empirical study of an agent using their approach OpAX.
Strengths: The authors point out an interesting research direction. They motivate their approach well and build up almost all of their mathematical framework. The empirical study manages to give a broad picture of the approach's performance.
Weaknesses: The empirical part of this paper is connected only weakly to the theoretical part. Where are the proven convergence properties in the study? Picking domains where edge cases of the convergence properties might be observed would help the paper. The authors state that the approach they show in the empirical study is very similar to other approaches found in the literature, but they show no direct comparison. Why is the empirical study part of this paper, then?
The authors' interpretation of the results is somewhat lavish. The performance of the main approach is very similar to the other shown approaches, and a clear advantage is not shown. The comparison on downstream tasks is important to stress the overall benefit of intrinsic reward, but is also to be expected. It is not discussed how the competitiveness or even advantage of the said approaches arises for the "high-dimensional task" (Fig. 4). Is there no price to pay for the more involved approach? Why are the baselines shown without training times? At some point, the authors argue that their approach is not limited to domains where the simple assumption of physicality might help. But why are all tested domains strictly physical, then?
Errors:
- throughout the paper: "c.f." --> "cf."
- line 65: "since" --> "when"
- line 133: "since" --> "since," (add comma)
- lines 168ff switch to $f^\star$ from $f^*$
- line 203: "Theorem 1;" --> "Theorem 1:" (use colon)
- line 212: "RKHS, however"--> "RHKS; however," (use semicolon and comma)
- line 222f: "deep mind" --> "DeepMind"
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: (also see "weaknesses")
Why does the presented approach work as well as it does? Are there no trade-offs involved?
What does the theoretical argument say about other types of intrinsic reward?
Where is the novelty in the empirical study?
What experiments would be necessary to show the borderline cases of the convergence property?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discuss the approach's limitations quite well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback and for pointing out our typos. We have addressed them in the paper.
### Weaknesses
*W1 Empirical evaluation of theoretical results*: We designed the pendulum experiment (line 242) on GPs, i.e., the RKHS setting, precisely to evaluate our theoretical findings empirically. In our experiments, we show, using the pendulum environment, how quickly our algorithm reduces the model epistemic uncertainty over the full state space of the pendulum. Moreover, in this experiment, we consider both GP models with an RBF kernel and BNNs using probabilistic ensembles. In Figure 1, we show that our algorithm monotonically decreases the epistemic uncertainty with the number of episodes $N$ for the GP case, i.e., validating the findings of Theorem 2. Furthermore, for the BNN case, we also show the reduction in the epistemic uncertainty, which corresponds to an empirical study of Theorem 1.
*W2 Similarity of baselines*: We compare with a similar baseline, namely [1]. Our intrinsic reward, the sum of log epistemic uncertainty, is similar to the one used by [1, 2] (they use the sum of epistemic uncertainty instead of the log). However, [1, 2] do not use the optimistic planner. Moreover, the two most commonly used planners in RL are: 1) mean (using the mean model) and 2) PETS [3]. We compare our algorithm to these for the same intrinsic reward (baselines: MEAN-AE, PETS-AE), where we show that our method performs on par with these baselines while also providing theoretical guarantees.
Furthermore, in Appendix D, we also compare the exact intrinsic reward from [1, 2], i.e., the sum of epistemic uncertainty without the log to ours. Here, we demonstrate that both choices of intrinsic rewards perform similarly. Additionally, we show that the intrinsic reward from [1, 2] is also theoretically sound (Lemma 11, Appendix D) when combined with optimistic planning from OpAx.
*W3 Empirical performance of OpAx and baselines*: The strength of our work lies in the theoretical guarantees. In the empirical results, we show that our algorithm performs at least on par with our baselines that do not yield such theoretical guarantees.
*W4 High-dimensional task study*: For the high-dimensional task, the state space is considerably larger (58D), which makes optimizing over hallucinated controls $\eta$ challenging (dimension of $\eta$ is equal to the dimension of the state). In Appendix C.2.2, we propose a heuristic variant for OpAx, which does not scale with the state space size, and therefore can handle high-dimensional systems. Our results indicate that the heuristic variant also performs well. We have updated the main paper to clarify this further.
*W5 Training Budget of OpAx vs Baselines*: All algorithms are given equal compute budget (cf., Appendix C). Our approach requires policy optimization over a larger domain space that includes the actions and the hallucinated actions $\eta$. However, as we list in Appendix C, for all our baselines we use the same hyperparameters (training, optimization steps, optimizer samples, etc.) for fairness. Therefore, all methods had equal training budgets.
*W6 Limitation to domains where the simple assumption of physicality holds, and evaluation*: We would appreciate it if the reviewer could refer us to the line where the argument is made; then we can address the comment in detail. We study general dynamical systems, which we also consider in our evaluation.
## Additional Questions:
*Q1 Trade-offs for OpAx*: The main practical trade-off of the algorithm lies in the optimization problem, which is now performed over a larger space that includes the hallucinated controls. In theory, we assume an oracle that solves the problem; in practice, this is challenging, for instance, due to compute limitations. To this end, we propose a heuristic variant of our approach in Appendix C.2.2 and show that it works for the high-dimensional setting.
*Q2 Other intrinsic rewards*: Essential to our algorithm and theoretical analysis is the choice of our intrinsic reward. Could the reviewer clarify this question? Is the reviewer asking about theoretical guarantees of another algorithm with another objective?
*Q3 Borderline case of convergence*: Could the reviewer clarify what is meant by a borderline case? Our results give a bound on the model epistemic uncertainty in the worst case (including the borderline case). For kernels such as RBF, this implies convergence for any function in its RKHS.
Having addressed all of the questions provided by the reviewer, and given the contributions of this paper, we kindly ask the reviewer to reconsider the assessment for our paper. We would be happy to answer any remaining questions or concerns.
## References
[1] Pathak et al (2019). Self-supervised exploration via disagreement. In International conference on machine learning.
[2] Sekar, R. et al (2020). Planning to explore via self-supervised world models. In International Conference on Machine Learning.
[3] Chua, K., et al. (2018) Deep reinforcement learning in a handful of trials using probabilistic dynamics models. Advances in neural information processing systems. | Summary: This paper studies provable exploration in model-based reinforcement learning and proposes an algorithm with optimistic active exploration based on information gain. Theoretical results and experimental results are provided to support their method.
Strengths: 1. The paper presents a practical algorithmic implementation of active exploration with optimism and information gain, by incorporating the techniques from Curi et al. [1] to introduce a hallucinated policy.
2. The authors provide thorough theoretical and experimental analysis.
3. The paper is well written and easy to follow.
[1] Sebastian Curi et al. Efficient Model-Based Reinforcement Learning through Optimistic Policy Search and Planning.
Weaknesses: 1. My biggest concern is the novelty of this paper. Provable exploration based on optimism (mutual information or uncertainty) is not novel. Although the proposed algorithm provides an efficient way for practical implementation, which is good, the novelty is still limited compared to H-UCRL, which is also an optimism-based MBRL algorithm with the same hallucinated policy technique.
2. The authors claim that their method can achieve better zero-shot performance compared to baselines. But it is not clear why this is the case. Can the authors explain in more detail? The current theory does not indicate this result.
3. What makes the algorithm better in terms of generalization ability compared to H-UCRL? They seem to be developed from very similar optimism perspectives with the same hallucinated-policy technique.
4. From the experimental results, it seems that H-UCRL outperforms the proposed algorithm in most training tasks, even though the proposed algorithm generalizes better (again, more explanation is needed). Can the authors comment on this?
5. Can the authors provide the training curves of H-UCRL and CEE-US instead of asymptotic performance?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weakness section above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: More discussions on related works and discussions on the zero-shot generalizability are needed to better characterize the contribution of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback. Below we address the reviewer’s concerns.
### Weaknesses
*W1 Novelty of our work*: We refer the reviewer to the author rebuttal section. If the reviewer is still concerned with the novelty of our work, we’d be happy to receive more detailed feedback on this concern. Additionally, a key component of our algorithm is active exploration which H-UCRL does not perform, i.e., our algorithm takes a generalist/uniform approach towards exploration of dynamical systems, whereas H-UCRL takes a specialist one (see W3).
*W2 Baselines*: We consider four baselines: PETS-AE, MEAN-AE, Random, and H-UCRL. Random uniformly samples actions from the action space and does not use any intrinsic rewards to guide exploration. Accordingly, it underperforms. PETS-AE and MEAN-AE perform well in all environments. They use our proposed intrinsic rewards, but with a greedy (not optimistic) planning approach that has no theoretical guarantees, except for linear systems [1]. Our method comes with strong theoretical guarantees, while empirically performing on par with SOTA baselines.
*W3 H-UCRL Baseline and zero-shot performance*: H-UCRL is a task-specific model-based RL algorithm; therefore, its exploration is guided toward the specific task it is trained for. Accordingly, its learned model does not generalize to novel unseen tasks. This is particularly observable in Figure 2, where we show that H-UCRL does not perform well on new tasks.
OpAx is a task-agnostic algorithm that performs undirected exploration. Therefore, OpAx is a generalist. Accordingly, OpAx achieves on-par performance with H-UCRL on the tasks H-UCRL is trained for; however, it outperforms H-UCRL on novel unseen tasks.
*W4 Training curves for H-UCRL and CEE-US*: We provide the training curves of H-UCRL for the swimmer environment in order to explain this trade-off between the specialist H-UCRL and the generalist OpAx (cf. Figure 2 in the attached document). From the figure, it is noticeable that H-UCRL achieves higher rewards faster for the tasks it is trained for, but fails to solve unseen/novel downstream tasks.
We have added these plots in the appendix of the revised version of the paper as well for further clarification. We also provide training curves for CEE-US (cf., Figure 3).
Thank you for your valuable feedback on our paper. We hope we addressed your concerns and if they're resolved, we kindly request you to consider revising our score upwards. We are happy to provide further clarification.
## Reference
[1] Simchowitz, Max, and Dylan Foster. "Naive exploration is optimal for online lqr." International Conference on Machine Learning. PMLR, 2020.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response. My concern regarding W4 is addressed. However, I'm still confused about why the proposed method has better zero-shot generalization ability. Can the authors provide insights beyond experimental results? I didn't seem to find the corresponding theoretical justifications. Besides, can the authors explain in more detail why OpAx performs "undirected exploration"? Also, the pseudocode does not indicate the multi-task feature that the authors claim.
---
Reply to Comment 1.1.1:
Title: Response on comment
Comment: We are happy to provide more insights.
We are operating in a setting where the dynamics of the system are unknown.
*Standard model-based RL approaches such as H-UCRL*: Given a model estimate, standard model-based RL (MBRL) approaches optimize for a fixed control task with a known reward function to obtain a policy. They execute this policy on the true system to gather data and update the learned model with the collected data. This process is repeated until satisfactory performance on the control task is achieved. Accordingly, they tend to learn the model well only in regions of the state-action space where the rewards for the control task are high, instead of learning a globally accurate model.
*OpAx*: For OpAx, we consider the problem of active exploration, where the goal is to explore the dynamics globally. To this end, we use the model's epistemic uncertainty as our reward (cf. Eq. 6). The model's epistemic uncertainty is high in regions where we have less data. Accordingly, at each episode, we plan a policy that drives the system to regions with high epistemic uncertainty. We collect data in these regions and update our model. Since we only consider the epistemic uncertainty as a reward and not a specific control task (as for standard MBRL methods), our algorithm explores the dynamics globally. This is the objective of active exploration. In Theorems 1 and 2, we theoretically justify our algorithm and show that our method provably explores the whole state-action space. To the best of our knowledge, we are the first to give such guarantees.
When standard MBRL approaches are evaluated on the control task they see during training, they generally achieve better performance faster. This is because they focus on learning a model in regions where the rewards for the task are high. However, since these approaches learn in regions specific to the control task, they do not perform well on new unseen tasks, in particular when the unseen tasks have high rewards in different regions of the state-action space. On the contrary, OpAx explores the whole domain and therefore performs better on new tasks, i.e., zero-shot generalization.
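As a rough illustration of the intrinsic-reward idea described above, the following sketch uses disagreement within an ensemble of learned dynamics models as a stand-in for the epistemic uncertainty (the function names and toy ensemble are hypothetical, not the OpAx implementation):

```python
# Sketch: epistemic uncertainty as an intrinsic reward (illustrative only).
# An ensemble of learned dynamics models disagrees most where little data
# has been collected, so its disagreement can serve as an exploration reward.
import numpy as np

def intrinsic_reward(z, ensemble):
    """Sum over state dimensions of the prediction std across ensemble members."""
    preds = np.stack([model(z) for model in ensemble])  # shape: (n_models, d_x)
    return float(np.sum(np.std(preds, axis=0)))

# Toy ensemble: members roughly agree near z = 0 and diverge away from it.
rng = np.random.default_rng(0)
ensemble = [lambda z, w=rng.normal(1.0, 0.3): w * z for _ in range(5)]

near_data = intrinsic_reward(np.array([0.1]), ensemble)
far_from_data = intrinsic_reward(np.array([10.0]), ensemble)
assert far_from_data > near_data  # planning is driven toward uncertain regions
```

A planner that maximizes this reward steers the system toward regions of high model uncertainty, which is the undirected, task-agnostic exploration behavior contrasted with H-UCRL above.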
We would appreciate it if you could reconsider your assessment, or respond with questions/suggestions so that we can improve the paper. | Rebuttal 1:
Rebuttal: We thank the reviewers for their feedback. It seems that some key contributions to our work have missed the attention of our reviewers. We have made our contributions more clear in the revised paper and included additional references. We highlight our contributions below for clarification.
1. We consider the problem of learning a dynamical system in a reward-free/task-agnostic manner over *continuous state and action spaces*. We adhere to the general textbook definition of a dynamical system, e.g. [1].
2. We introduce a practical exploration objective, based on the concept of information in Bayesian experiment design. Our *novel* derivation technique may be of *independent interest* in active learning and experiment design.
3. We utilize the principle of optimism in the face of uncertainty (OFU) and give a PAC bound on the epistemic uncertainty of any visited trajectory (Theorem 2).
4. To our knowledge, this paper is the first to present active learning guarantees on *continuous domains* and for *generic dynamical systems in an RKHS*. Prior work is limited to strictly simpler classes of systems such as finite, linear, and low-rank MDPs [2, 3, 4, 5], linear systems [6], and nonlinear systems with finite-dimensional feature spaces [7]. In fact, concurrent work on model-based RL [8, 9] explicitly highlights the *lack of theoretical guarantees* within the context of curious/active exploration of general dynamical systems. Our paper directly targets this gap.
5. We validate our method extensively over 6 RL tasks, to demonstrate that *in addition to enjoying strong theoretical guarantees*, OPAX performs on par with the state-of-the-art baselines, none of which are supported by theory.
We have revised our Related Works section and added additional experiments in the Appendix to demonstrate the benefits of active learning and optimism, as asked for by our reviewers. Furthermore, some of our reviewers raised concerns about the exponential dependence of our bound on the horizon $T$. In accordance with prior work [10], we believe stronger assumptions are needed to alleviate this. We had discussed this in Appendix B, where in Theorem 4 we give a tighter bound (polynomial in $T$) under additional assumptions on the noise. We have now moved the theorem to the main paper for more visibility.
We would be happy to further update the paper, if there are any remaining questions or feedback by the reviewers.
## References
[1] Khalil, H. K. (2015). Nonlinear control, volume 406. Pearson New York.
[2] Jin, C. et al (2020). Reward-free exploration for reinforcement learning. International Conference on Machine Learning.
[3] Tarbouriech, J. et al (2020). Active model estimation in markov decision processes. Conference on Uncertainty in Artificial Intelligence.
[4] Wagenmaker, A. J. et al (2022). Reward-free RL is no harder than reward-aware RL in linear Markov decision processes.
[5] Chen, J. et al (2023). On the statistical efficiency of reward-free exploration in non-linear rl. Advances in Neural Information Processing Systems.
[6] Wagenmaker, A. et al (2020). Active learning for identification of linear dynamical systems. In Conference on Learning Theory.
[7] Mania, H. et al (2022). Active learning for nonlinear system identification with guarantees. The Journal of Machine Learning Research.
[8] Chakraborty, S. et al (2023). STEERING: Stein Information Directed Exploration for Model-Based Reinforcement Learning. arXiv preprint arXiv:2301.12038.
[9] Wagenmaker, A. et al (2023). Optimal Exploration for Model-Based RL in Nonlinear Systems." arXiv preprint arXiv:2306.09210.
[10] Curi, S., Berkenkamp, F., and Krause, A. (2020). Efficient model-based reinforcement learning through optimistic policy search and planning. Advances in Neural Information Processing Systems.
Pdf: /pdf/05ee95938f18a9f3d690a3066f2b62e9601208f7.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper proposes a task-agnostic active exploration algorithm for non-linear dynamical systems, as long as the model can be well calibrated. By combining the optimistic exploration principle with standard Bayesian techniques, they give a general convergence bound as well as a more specific bound in the Gaussian process case. Besides the theoretical proofs, they also provide several downstream experiments, showing their algorithm can achieve better performance.
Strengths: 1. This paper gives the first theoretical guarantees on general non-linear dynamical models, which is significant.
2. The Bayesian framework is somewhat novel and can be extended to many cases.
3. Their theoretical analysis is solid. Their results on the Gaussian process model help readers further understand this problem.
4. I am not familiar with control experiments, but according to what is stated in the paper, their approach gives nontrivial improvements.
Weaknesses: 1. There is no discussion of the computational complexity. For Eq. (7), it is unclear to me how to efficiently solve for $\pi$ and $\eta$ when the policy space is large or $T$ is large.
2. The technique itself is not very surprising to me. Exploration by optimism is widely used in RL papers, and the estimation of the confidence bound seems to me a standard derivation from existing techniques.
3. There is no discussion of the lower bound; therefore, it is hard for me to understand how good this result is.
But I am willing to raise my score if my questions are addressed, if there is some technical difficulty I am missing, or if someone else wants to advocate for the contribution of the experimental results.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Main questions:
1. Can you give more discussion of the computational complexity? Or maybe give some examples of existing oracles?
2. Currently, the paper aims to upper bound the maximum expected information gain (or the epistemic uncertainty). While I believe it makes sense, it is hard for me to connect it with previous papers. For example, [Wagenmaker and Jamieson, 2020] gives an upper bound on $||\hat{A} - A^*||$, which is very easy to understand. Can you show what this maximum expected information gain implies in those simple models, so readers can have more intuition about your results?
3. Can you give some discussions on the lower bound?
4. Can you give more discussion of Assumption 2 (lines 114-115) and Line 160, "Incorporating constraints Equation (7) can impose input constraints by considering a restrictive class of policies $\Pi$."? I understand this is one standard assumption. But theoretically, it is still unclear to me how difficult it is to lift this assumption in order to get the generalization result, and how it will affect the computational complexity.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback, our response follows.
## Weaknesses
*W1 Computational complexity*: Solving an optimization problem for general nonlinear systems is challenging and out of the scope of this work. For our problem formulation, standard trajectory optimizers such as iLQR [1] and iCEM [2], or policy optimizers such as BPTT [3] and SAC [4], can be and have been commonly used. We also provide a heuristic variant of our algorithm in Appendix C.2.2 which, unlike the theoretical variant, does not scale with the dimension of the state space. This heuristic variant was used for the experiments in the “Fetch, Pick & Place Construction” environment. We have added this explanation to the updated version of the paper.
*W2 Novelty of technique*: As we highlight in the general summary above, the novelty of our method lies in leveraging ideas from Bayesian experiment design, and the OFU paradigm, to provide novel and first-of-its-kind analysis for active learning. We relate the regret of planning under unknown dynamics and the information gained over a whole trajectory, to give a novel sample complexity bound for active/curious exploration. To our knowledge, we are the first to do so. This lack of theoretical results is also acknowledged by concurrent works [6, 7].
*W3 There is no discussion on the lower bound*: Providing lower bounds is a challenging open research problem. However, for more restrictive classes of problems [7, 8, 9], there exist nearly minimax-optimal lower bounds which, compared to the upper bound, only vary in constants and logarithmic terms (w.r.t. $N$). In our setting, we would expect a similar outcome, but we acknowledge that this is yet to be solved as future work.
## Additional Questions:
*Q1 Intuition of the upper bound*: We are happy to give some intuition. Mainly, for the general RKHS setting we give a bound on how fast the epistemic uncertainty decays (Theorem 2).
In Lemma 2, we show that with high probability for all $j \in \{1, \dots, d_x\}$
$$
|\mu_{n, j}(z) - f_j^*(z)| \leq \beta_n(\delta) \sigma_{n, j}(z) \leq \mathcal{O}\left(\sqrt{\frac{\gamma^{T+1}_n}{n}}\right)
$$
Therefore, the decay rate of the epistemic uncertainty quantifies how fast our mean estimate converges to the true function. Our result holds for general RKHS kernels. Example for the linear kernel (with known matrix B): $f^*(z) = Ax + Bu; \mu_n(z) = \hat{A}x + Bu$.
Then our bound is of the following sort:
$$
||\mu_{n}(z) - f^*(z)||_1 = ||(\hat{A} - A)x||_1 \leq \mathcal{O}\left(\sqrt{\frac{d^{T} \log^T(n)}{n}}\right)
$$
for all $x$. Note, this bound holds for a more general setting than the one considered in [Wagenmaker and Jamieson, 2020]. Furthermore, under stronger assumptions on the noise (Gaussian, instead of sub-Gaussian), we can avoid the exponential dependence on $T$ (see Theorem 4 Appendix B for more detail).
*Q2 Constraints on inputs*: For input constraints like $||a|| \leq a_{\max}$, we can restrict our search to policies that only map to actions between $[-a_{\max}, a_{\max}]$. Most RL optimizers, such as SAC, squash actions to a fixed range.
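The action-squashing mentioned above can be sketched in a few lines (the helper name is hypothetical and not tied to any particular SAC implementation): a tanh maps any unbounded policy output into $[-a_{\max}, a_{\max}]$.

```python
# Sketch: enforcing input constraints |a| <= a_max by squashing with tanh,
# as SAC-style policy optimizers do (illustrative only).
import math

def squash(raw_action, a_max):
    """Map an unbounded policy output into [-a_max, a_max]."""
    return a_max * math.tanh(raw_action)

a_max = 2.0
for raw in (-100.0, -1.0, 0.0, 1.0, 100.0):
    assert -a_max <= squash(raw, a_max) <= a_max  # constraint holds for any input
```

Because tanh is a smooth bijection onto (-1, 1), restricting the search to squashed policies imposes the constraint without clipping gradients.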
Having addressed all of the questions provided by the reviewer, and given the contributions of this paper, we would appreciate it if the reviewer would increase their score for our paper. We would be happy to answer any remaining questions or concerns.
## References
[1] Li, Weiwei, and Emanuel Todorov (2004). Iterative linear quadratic regulator design for nonlinear biological movement systems. First International Conference on Informatics in Control, Automation and Robotics.
[2] Pinneri, C., et al (2021). Sample-efficient cross-entropy method for real-time planning. Conference on Robot Learning.
[3] Clavera, I. et al (2020). Model-augmented actor-critic: Backpropagating through paths. arXiv preprint arXiv:2005.08068 .
[4] Haarnoja, T., et al (2018). Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. International conference on machine learning.
[5] Kakade, S., et al (2020). Information theoretic regret bounds for online nonlinear control. Advances in Neural Information Processing Systems.
[6] Chakraborty, S. et al (2023). STEERING: Stein Information Directed Exploration for Model-Based Reinforcement Learning. arXiv preprint arXiv:2301.12038.
[7] Wagenmaker, A. et al (2023). Optimal Exploration for Model-Based RL in Nonlinear Systems. arXiv preprint arXiv:2306.09210.
[8] Wagenmaker, A. et al (2020). Active learning for identification of linear dynamical systems. In Conference on Learning Theory.
[9] Scarlett, J., et al (2017). Lower bounds on regret for noisy gaussian process bandit optimization. Conference on Learning Theory.
---
Rebuttal Comment 1.1:
Comment: Thanks for clarifying my additional questions.
I agree this paper has some solid contributions, including a novel problem formulation, some sound proofs, and corresponding experimental results as a supplement to the computational problem. Meanwhile, I still think (1) the OFU techniques are not novel enough; (2) although many papers only consider sample complexity instead of computational complexity, they all have substantial novelty in their techniques, while this paper does not.
Based on this, I raise my score to 5 because both pros and cons are obvious. It depends on the AC to judge which shall weigh more. If this paper gets accepted, I suggest adding some explanation, including the one mentioned in Q1 and the exponential-horizon issue, so readers can better understand this result within the overall literature.
---
Reply to Comment 1.1.1:
Title: Follow up on reviewer's comment
Comment: We thank you for increasing our score. Following the suggestion, we have added the additional explanation in the updated version of the paper.
As recognized, the paper has solid contributions including *being the first one* to give sample complexity of learning unknown dynamical systems with continuous state and action spaces. This is recognized as an open problem ([1, Paragraph 1 in the Introduction] [2, Paragraph 2 in the Related Work]).
The OFU technique has been used in several settings such as analyzing cumulative regret. However, it has not been used for model-based active exploration of dynamical systems with continuous state and action spaces.
Particularly, in our setting, the pure OFU technique is not enough to prove sample complexity bounds. Our active exploration objective is integral to the theoretical analysis. The objective we propose is the *information gained over a trajectory/rollout*, which we simplify (to a practical intrinsic reward) in Lemma 1. *This has also not been done by prior work*. The combination of our objective and the OFU principle gives the SOTA sample complexity guarantees. Our theoretical analysis goes beyond just applying the OFU principle, and the proof technique is of independent interest for the active exploration community. Accordingly, we believe that our technique has substantial novelty beyond just the OFU principle.
Given your second comment, we are curious which papers you are referring to, since, as we highlight above, we (as well as [1, 2]) are not aware of any sample complexity bounds for our general case.
We hope to have addressed your concerns regarding our contribution and would appreciate a reevaluation of our work.
[1] Chakraborty, S. et al (2023). STEERING: Stein Information Directed Exploration for Model-Based Reinforcement Learning. arXiv preprint arXiv:2301.12038.
[2] Wagenmaker, A. et al (2023). Optimal Exploration for Model-Based RL in Nonlinear Systems." arXiv preprint arXiv:2306.09210. | Summary: This paper proposes and studies a rather intuitive algorithm for active learning in nonlinear dynamical systems in the episodic setting. They establish consistency (in terms of mutual information) and provide supporting numerical experiments.
Strengths: * The proposed algorithm is intuitive and it is satisfying that it "works" (in terms of consistency).
* The paper is generally quite well-written and I did not have (m)any issues in terms of clarity and level of writing.
* The exact setting is relatively novel and well-motivated (some caveats below) and the question is interesting and definitely deserves further study.
* The experiments look relatively thorough (but I am not the best person to judge their significance).
Weaknesses:
* While I am sympathetic to the fact that it can be hard to keep track of everything appearing in ML conferences, unfortunately the manuscript does miss some of the most closely related recent references, especially when it comes to learning in nonlinear dynamical systems or general mixing processes. My concern here is that missing these makes the contribution of the present paper appear larger than it actually is, since the below references [A,B,C,D] also treat learning in rather general dynamical systems/time-series. In this light, the last part of the stated contributions (cf. line 52) is somewhat misleading, as it states that related work is not general enough to treat the present system dynamics. However, [A,B,C,D] are actually general enough (or at least very close to general enough), and the authors might want to complement their references listed starting from line 50 (which certainly aren't the most general setups considered in the recent literature). In particular, [B,C] also explicitly study RKHS-like dynamical systems.
[A] Roy, Abhishek, Krishnakumar Balasubramanian, and Murat A. Erdogdu. "On empirical risk minimization with dependent and heavy-tailed data." Advances in Neural Information Processing Systems 34 (2021): 8913-8926.
[B] Ziemann, Ingvar M., Henrik Sandberg, and Nikolai Matni. "Single trajectory nonparametric learning of nonlinear dynamics." conference on Learning Theory. PMLR, 2022.
[C] Ziemann, Ingvar, and Stephen Tu. "Learning with little mixing." Advances in Neural Information Processing Systems, 2022.
[D] Li, Yingcong, et al. "Transformers as algorithms: Generalization and stability in in-context learning." International Conference on Machine Learning. 2023.
* As the above references do not treat the active learning setting, this limitation can easily be overcome by including the above references and making the qualifying distinction that the contributions are not "first in system identification/supervised learning of nonlinear systems" but rather first in terms of experiment design/active learning.
* The exponential dependence on the horizon in the main results appears overly pessimistic to me, and I question whether the bound is informative at all in any meaningful setting. While convergence is not guaranteed in terms of MI, the above references achieve convergence even from a single trajectory, at least indicating at a glance that such a dependence ought to be removable. This exponential dependence on the horizon in the provided bounds is a major caveat here---I have therefore estimated the theoretical results to be asymptotic consistency results and not bona fide finite sample guarantees.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * Could the authors comment on the necessity of the exponential dependence on the horizon in their main result? It would be interesting to see if this is indeed necessary or whether it can be overcome by an improved proof method/algorithm.
* Is the dependence on $\delta$ hidden in the main results (and if so, what is it)? It seems a little strange to me, as the result is claimed to hold w.p. at least $1-\delta$.
* Are there any interesting cases where $ L \times \max \beta \leq 1$ or is the exponential dependence always present?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and for providing additional references.
As the reviewer highlighted, the shared references do not consider the challenging active learning/unsupervised learning setting and focus more on the supervised learning problem. We have included the references in the revised paper, as suggested.
## Weaknesses
*W1. Exponential dependency on horizon T in regret bound*: The exponential dependency may be removed; however, in accordance with prior work, we believe it will require further assumptions (cf. [1], last paragraph, page 7). For instance, restricting the noise to be sampled from a Gaussian distribution (as opposed to sub-Gaussian) gives an $\mathcal{O}(\beta_N\sqrt{T\frac{\gamma_N}{N}})$ bound in the RKHS setting. This analysis was presented in Appendix B, Theorem 4. In the revised paper, we moved this result to the main text.
*W2. Comparison to the single-trajectory setting*: The single-trajectory works mostly deal with linear systems that are stable or assume knowledge of a stabilizing controller [2]. We make no such assumptions. Works that first learn a stabilizing controller from the data also suffer from an exponential growth in the cost until a controller is learned [3]. Our general setting will require stronger assumptions to avoid the exponential dependence (see response to W1). We also emphasize that we give a worst-case theoretical upper bound. In practice, the epistemic uncertainty decreases much faster, as depicted in Figure 1 in the paper.
## Additional Questions
*Q1 Dependence on $\delta$*: The calibration factor $\beta_n$ depends on $\delta$ (see definition 2). For the RKHS case, the dependence of $\beta_n$ on $\delta$ is well studied ( $\beta_n \propto \sqrt{\log(1/\delta)}$) [e.g. in 4].
Having addressed your concerns we kindly ask you to consider revising our score. For any remaining questions, we are happy to provide further clarification.
## References
[1] Curi, S., Berkenkamp, F., and Krause, A. (2020). Efficient model-based reinforcement learning through optimistic policy search and planning. Advances in Neural Information Processing Systems.
[2] Simchowitz, M. and Foster, D. (2020). Naive exploration is optimal for online lqr. In International Conference on Machine Learning, pages 8937–8948. PMLR.
[3] Chen, Xinyi, and Elad Hazan. "Black-box control for linear dynamical systems." In Conference on Learning Theory, pp. 1114-1143. PMLR, 2021.
[4] Chowdhury, S.R., et al (2017). On kernelized multi-armed bandits. International Conference on Machine Learning.
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal and am happy to raise my score to a 6.
In general, I think bounds with exponential time dependencies are prohibitive, so I am glad that the authors showcase the result removing this dependence in the main body---even though it appears restricted to the Gaussian setting. Although I have not verified it, it sounds a little to me like this exponential dependence is driven by a lack of hypercontractivity/log-concavity for general sub-Gaussians. I'd be curious to hear the authors' thoughts on this/what drives this dependence.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for increasing our score.
Indeed, this could be an explanation. Moreover, together with the dynamics and noise we induce a stochastic process that represents the evolution of the dynamical system. This process does not satisfy any contraction properties, i.e., in general, the system may be unstable.
When analyzing our bound, a key quantity that we study is how the two trajectories (optimistic and true) evolve over a horizon of $T$. In general, they can diverge exponentially with the horizon $T$. However, for the special case of Gaussian noise, due to its support on the whole domain and the boundedness of our exploration objective (the epistemic uncertainty is always upper-bounded), we can perform a tighter analysis similar to [1].
[1] Kakade, S., et al (2020). Information theoretic regret bounds for online nonlinear control. Advances in Neural Information Processing Systems.
We are happy to answer further questions. | null | null | null | null |
Not All Neuro-Symbolic Concepts Are Created Equal: Analysis and Mitigation of Reasoning Shortcuts | Accept (poster) | Summary: This paper studies reasoning shortcuts (RS) in neuro-symbolic learning. RS refers to that the neural network does not learn a generalizable concept. This work provides a systematic characterization of RS, which not only gives a formal definition of RS, but also identifies key conditions of its occurrence. Several mitigation strategies are empirically or theoretically analyzed.
Strengths: - The paper is well-motivated and well-written. I particularly appreciate the illustrative example in Figure 1.
- The definition of RS is derived from causal representation learning, and it is quite reasonable.
- Some promising theoretical results are presented, with proofs provided in the appendix.
Weaknesses: - The paper focuses solely on probabilistic logic approaches like DPL, which limits it to a relatively narrow subfield of neuro-symbolic learning.
- This paper introduces too many concepts that may be unnecessary, such as NeSy predictors, unintended semantics, concept extractors, concept distributions, deterministic distributions, etc. Therefore, it may not be easily understood by readers who are unfamiliar with this subfield.
- Some more experiments on other baseline methods are needed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - The discussed neuro-symbolic learning is essentially in a weakly supervised learning setting (correct me if I'm wrong) [1]. Therefore, can the RS and its analysis be extended to general supervised learning problems?
- From my understanding, using a Shannon entropy loss is critical to avoiding shortcuts. As mentioned in the last paragraph of Section 5.4, the entropy loss conflicts with NeSy prediction, but [1] proposes an annealing strategy to gradually remove this entropy regularization. Furthermore, there are quite a few classic methods using maximum-entropy-like regularization, e.g., mixup [2], label smoothing [3], focal loss [4], energy-based models [5], and logit normalization [6]. I strongly recommend separating this part into an additional subsection to reveal this clue for alleviating RSs.
[1] Qing Li, Siyuan Huang, Yining Hong, Yixin Chen, Ying Nian Wu, Song-Chun Zhu. Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning.
[1] Zenan Li, Yuan Yao, Taolue Chen, Jingwei Xu, Chun Cao, Xiaoxing Ma, Jian Lu. Softened Symbol Grounding for Neuro-symbolic Systems.
[2] Luigi Carratino, Moustapha Cissé, Rodolphe Jenatton, Jean-Philippe Vert. On Mixup Regularization.
[3] Rafael Müller, Simon Kornblith, Geoffrey Hinton. When Does Label Smoothing Help?
[4] Jishnu Mukhoti, Viveka Kulharia, Amartya Sanyal, Stuart Golodetz, Philip Torr, Puneet Dokania. Calibrating Deep Neural Networks using Focal Loss.
[5] Connor Pryor, Charles Dickens, Eriq Augustine, Alon Albalak, William Wang, Lise Getoor. NeuPSL: Neural Probabilistic Soft Logic.
[6] Hongxin Wei, Renchunzi Xie, Hao Cheng, Lei Feng, Bo An, Yixuan Li. Mitigating Neural Network Overconfidence with Logit Normalization.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No, the limitations of the method were not discussed. The work does not have negative social impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments about our work, in particular for finding it well-motivated and well-written. Below, we address the reviewer’s concerns.
**Evaluated approaches are not exhaustive**
We study reasoning shortcuts (RSs) in NeSy problems where knowledge is explicit and provided upfront. This setting is very common in NeSy AI, as shown by the list of references we provided in the "NeSy predictors" paragraph (other references were cut due to space constraints). Within this setting, DPL, SL, and LTN are very representative. DPL and SL are based on probabilistic logic: DPL introduces a specific layer on top of the concepts to perform logical combination, while SL uses a regularization loss to integrate logic. LTN is representative of fuzzy logic: it evaluates the satisfiability of the logical formula via a fuzzy relaxation of the operators.
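To make the distinction concrete, here is a hypothetical toy evaluation (illustrative only; these are not the operators actually used by DPL, SL, or LTN): probabilistic logic scores a formula by the probability that it holds under independent concept distributions, whereas a fuzzy relaxation such as the Gödel t-conorm returns a truth degree.

```python
# Illustrative sketch (hypothetical): scoring the formula "c1 OR c2"
# under probabilistic vs. fuzzy semantics, given independent concept
# probabilities p1 and p2.

def prob_or(p1: float, p2: float) -> float:
    # Probabilistic semantics: probability that at least one concept holds.
    return 1.0 - (1.0 - p1) * (1.0 - p2)

def goedel_or(p1: float, p2: float) -> float:
    # Fuzzy semantics (Goedel t-conorm): truth degree of the disjunction.
    return max(p1, p2)

p1, p2 = 0.6, 0.5
assert abs(prob_or(p1, p2) - 0.8) < 1e-12
assert goedel_or(p1, p2) == 0.6
```

The two semantics agree at the extremes (truth values 0 and 1) but differ on soft assignments, which is the sense in which probabilistic and fuzzy approaches integrate logic differently.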
We completely agree that RSs in settings where the knowledge is learned are equally interesting, and we do plan to look into them in a follow-up work.
While we do conjecture RSs to be a widespread phenomenon, as we noted in the Related Work section (page 8), NeSy approaches are far too heterogeneous to possibly experiment with them all in any given paper. We plan on extending our evaluation in a longer version of the paper.
**Notation heavy:**
We agree that the paper introduces several concepts; however, they are necessary to properly formalize our learning problem, all the more so given that this paper aims to lay the theoretical foundations of reasoning shortcuts.
**Supervised vs weakly supervised**
We agree that our learning setup can be seen as a weakly-supervised setting according to the definition in [1]: (latent) concepts $C$ are learned with supervision on the labels $Y$ while leveraging the knowledge $K$.
We are not sure what the reviewer means by general supervised learning. Supervising the concepts $C$ as well can be seen as a very effective mitigation strategy (concept supervision in the text), for which the RS count is reduced by the constraint in Prop. 5.
If, on the contrary, the reviewer means regular neural classifiers in which the label $Y$ is predicted from the input $X$ without computing the concepts $C$ explicitly, our results can be adapted whenever the knowledge $K$ is defined over the labels. In fact, in this case, SL can be adapted to regularize the predictions $Y$ with the known logical constraints. If some of the labels are latent, reasoning shortcuts can still appear, though in a different form. We believe this constitutes an interesting extension of our present work.
**Shannon Entropy**
Thank you for the references.
These works focus on increasing the entropy of $p(Y | X)$. This is insufficient to prevent reasoning shortcuts, which affect $p(C | X)$. While it is possible to adapt these strategies - for instance, label smoothing - to $p(C | X)$, doing so might conflict with the objective of recovering the ground-truth concept distributions, that is, the "correct semantics". This is because, in general, there is no guarantee the ground-truth concepts themselves are high-entropy. We will cite the provided works and discuss this point in our paper. We will also convert the paragraph into a subsection, as you suggested.
Please note that we did experiment with a strategy encouraging high entropy on $p(C)$, taken from [Manhaeve et al., 2021] and denoted +H in our tables. As noted in the "Other heuristics" paragraph on page 7, Shannon entropy can be useful in practice for regularizing the network's predictions, but its learning objective can conflict with the original learning loss (e.g., maximum likelihood for DPL). However, doing so means that it is no longer possible to explicitly count the number of deterministic RSs as in Eq. (6). Notice that imposing a Shannon-entropy regularizer on the concepts $C$ amounts, up to an additive constant, to minimizing the KL divergence between the concept distribution $p(C)$ and the uniform distribution. Since the data is often not distributed uniformly, optimizing this divergence conflicts with recovering the true concept distribution (especially if there are few concepts, as in our experiment Q2, bottom). In practice, the +H strategy can help prevent some RSs, but it is insufficient to avoid all of them, as shown in Table 3, top.
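As a numerical sanity check of the relation between the entropy regularizer and distance-to-uniform (a standalone sketch, not the paper's code): for a distribution $p$ over $K$ concept values, $\mathrm{KL}(p \,\|\, \mathrm{uniform}) = \log K - H(p)$, so pushing entropy up is, up to a constant, the same objective as pushing $p(C)$ toward uniform, which conflicts with skewed ground-truth concept distributions.

```python
import numpy as np

def entropy(p):
    # Shannon entropy H(p), skipping zero-probability entries.
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def kl(p, q):
    # KL divergence KL(p || q), skipping zero-probability entries of p.
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

K = 4
uniform = np.full(K, 1.0 / K)
p = np.array([0.7, 0.1, 0.1, 0.1])  # a skewed concept distribution

# KL(p || uniform) = log K - H(p): the two objectives differ by a constant.
assert abs(kl(p, uniform) - (np.log(K) - entropy(p))) < 1e-12
# The divergence vanishes only at the uniform distribution itself.
assert abs(kl(uniform, uniform)) < 1e-12
```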
---
Rebuttal Comment 1.1:
Title: Reply
Comment: Thank you for clarifying my concerns. I would like to further discuss the Shannon entropy regularization. I agree that using a Shannon entropy term with a constant coefficient may cause conflicts. However, this issue can be properly addressed by applying an adaptive strategy, such as focal loss [4] or an annealing strategy [1].
In general, this is a good paper; I have raised my score to 7. | Summary: This paper proposes mitigation and evaluation strategies for "reasoning shortcuts" in neuro-symbolic reasoning models, roughly corresponding to when the concepts (i.e., latent variables) extracted do not match the "true" ground-truth factors. The paper then examines reasoning shortcuts on several proposed, simple datasets, including XOR, MNIST-Addition, MNIST-EvenOdd, MNIST-AddMul, and BDD-OIA.
Strengths: The topic is important and underexplored, while the approach taken in this paper is novel. In general, the presentation is also excellent. The experiments, while fairly simple, are appropriate given the topic. The work is highly related to and extends prior work on disentanglement in machine learning, which is referenced (though not especially explicitly discussed). It is also decently well-motivated and some of the tasks like the BDD dataset highlighted interesting failure cases.
Weaknesses: Fundamentally, this paper is subject to many of the same challenges as prior work on disentanglement. That is, the task of identifying the "correct" latent variables is the task of disentanglement. While Locatello et al. (2019)'s "Challenging common assumptions" is cited for the definition of disentanglement (as [44]), the key lesson is never explicitly referenced (at least, not attributed): without supervision or inductive biases on the model and data, disentanglement is impossible. The authors do highlight a similar point empirically, that "Disentanglement is not enough under selection bias." Of course, this is not a particularly high bar, and indeed, the strategies taken to mitigate reasoning shortcuts in this paper do so with varying degrees of explicitness. For example, the paper's "data-based mitigation" explicitly provides supervision using a subset of the latent variables. In their objective-based approach, which draws inspiration from autoencoders, they could plausibly also have built on disentanglement-focused literature like InfoGAN (Chen et al. 2016) or $\beta$-TCVAE (Chen et al. 2018). That said, the paper approaches many of these questions from a refreshing new perspective.
This leads to the second concern - to some extent, many of the proposed mitigation strategies are standard in the disentanglement literature. For example, "On the relationship between disentanglement and multi-task learning" (Maziarka et al. 2021) highlights the role of training on multiple tasks for learning disentangled representations -- in this paper, that is seemingly equivalent to knowledge-based mitigation. On the other hand, the data-based mitigation seemingly corresponds to the approach taken in "Semi-Supervised StyleGAN for Disentanglement Learning" (Nie et al. 2020). The objective-based mitigation (i.e., reconstruction loss) is, of course, central to many representation learning approaches. Then in the discussion of "architecture-based mitigation," the connection to disentanglement work is made explicit. However, approaching these questions from the perspective of reasoning is fairly novel and is an important connection.
Ultimately, while the problem itself is well-motivated and novel, and the paper itself is excellent, and I believe it should likely be accepted based on the analysis alone, my primary reservation stems from the question of how reasoning, in particular, has shaped the mitigation strategies taken in this paper, beyond the datasets considered.
---- Post-rebuttal update ----
Most of my concerns have been addressed - I've raised my score to a 7.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: What differences would you highlight between the reasoning-focused tasks in this paper and the more traditional tasks in the disentanglement literature? How are the approaches discussed in this paper particularly well-suited for those tasks?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: I think some more explicit discussion of the theoretical limitations demonstrated by related works (e.g., Locatello et al. 2019) would have been useful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments about our work, in particular for finding it excellent, novel, and well-presented. Below, we address the reviewer’s concerns.
**Disentanglement vs. RSs**
Thank you for bringing up this interesting connection. There is a strong link between RSs and the problem of identifiability of discrete latent variables [1]. Identifiability is however different from (and stronger than) disentanglement.
Notice also that, whereas most works on identifiability seek (im)possibility conditions, we go one step further and provide a count of the number of Reasoning Shortcuts, i.e., ways in which the model can fail to identify the right concepts.
Hence, we are not addressing conditions to achieve identifiability but, rather, we describe the causes for non-identifiability.
As for specific differences, notice that research on disentanglement focuses on continuous latent factors (e.g., in non-linear independent component analysis and disentangled representation learning via VAEs), does not consider symbolic knowledge, and typically allows factors to be permuted. In contrast, in our work, we deal with discrete concepts learned from prior knowledge and do not allow permutations of factors.
We completely agree that techniques from causal representation learning provide useful tools for understanding neuro-symbolic integration, and we hope our work serves as a starting point for this research direction.
**Strategies are not novel**
Our paper aims to bring the issue of RSs - and their impact on trustworthiness - to the spotlight and to understand the root causes thereof.
As mentioned in the abstract, we study a set of **natural** mitigation strategies, which however only become obvious given our identified root causes. We do not claim these are novel or technically advanced. Rather, our contribution is making the impact they have on the number of reasoning shortcuts explicit (cf. Table 1).
We are aware that (weak) supervision has been used to encourage disentanglement. The work of Maziarka et al. is however new to us; we will reference it in the MTL section, thank you for the pointer. However, these works seek disentanglement, while we are concerned with a stronger property, namely identifiability.
Notice also that estimating the effect of disentanglement does not require specifying how it is achieved. In fact, disentanglement on the XOR and MNIST datasets can be enforced architecturally and tested empirically; in Sections Q1 and Q2, we investigate this point empirically. Regardless, the approaches indicated by the reviewer may affect different root causes and could in principle lead to a reduction of RSs. We will include the suggested works in the discussion of how to achieve disentanglement in practice in Section 5.4.
[1] Hyvärinen et al. "Identifiability of latent-variable and structural-equation models: from linear to nonlinear." arXiv:2302.02672 (2023).
---
Rebuttal Comment 1.1:
Comment: Thanks for the response!
> **Disentanglement vs. RSs**
> As for specific differences, notice that research on disentanglement focuses on continuous latent factors (e.g., in non-linear independent component analysis and disentangled representation learning via VAEs), does not consider symbolic knowledge, and typically allows factors to be permuted. In contrast, in our work, we deal with discrete concepts learned from prior knowledge and do not allow permutations of factors.
I do not believe this fully matches the use of disentanglement in the literature. While discrete, ordered variables have not been the primary focus of the disentanglement literature (arguably, out of convenience), both aspects have been explored. The question of how to learn discrete latent factors has attracted a decent amount of interest in several contexts, and one can find dozens of papers on this topic [1-4, and many others]. Moreover, most (probably all standard) disentanglement methods that provide supervision do not allow arbitrary permutation of variables (e.g., [5] mentioned in the original review, and [6]).
> **Strategies are not novel**
As noted in the original review, I believe the framing and specific motivation in this work are novel, and in general, am positive about this work. However, given that there is a body of prior work which aims to implement similar techniques to these mitigation strategies, I believe it is worth highlighting these connections more explicitly.
Refs:
1. Learning Disentangled Joint Continuous and Discrete Representations, Dupont 2018
2. Disentangling generative factors in natural language with discrete variational autoencoders, Mercatali and Freitas 2021
3. Structured Disentangled Representations, Esmaeili et al 2019
4. Learning Disentangled Discrete Representations, Friede et al 2023
5. Semi-Supervised StyleGAN for Disentanglement Learning, Nie et al 2020
6. Weakly-Supervised Disentanglement Without Compromises, Locatello et al 2020
---
Reply to Comment 1.1.1:
Title: Reply by Authors
Comment: Thank you for the quick reply and your effort in engaging with us!
In light of this discussion, **we agree related work on disentanglement should be covered more thoroughly in the main text, and we plan to do so in the Related Work section**. Please let us know if you’d prefer we address this differently.
**On disentanglement.** We stand corrected, you are right that there exists also work on disentanglement of discrete variables.
We acknowledge that there exist different definitions of disentanglement in the literature. The reason why we distinguish between disentanglement and identifiability is based on our reading of [Suter et al., Reddy et al., Hyvarinen et al.]. Our view is that factors are disentangled iff they can be manipulated (through do-operations) independently of each other [Suter et al., Reddy et al.], while identifiability amounts to finding the "correct factors" [Hyvarinen et al.], regardless of whether they are disentangled or not. There are also weaker notions of identifiability (such as "weak identifiability" [Hyvarinen and Morioka] and "alignment" [Marconato et al.]) that lie in between disentanglement and identifiability. Note also that if the ground-truth factors are correlated with one another, and the learned concepts correctly identify them, then these cannot be disentangled (according to [Suter et al., Reddy et al.]). Our analysis also covers this case, in that it makes no assumptions about the G's being disentangled. This is why we are keen on keeping this distinction.
[Suter et al.] Suter, Miladinovic, Schoelkopf, Bauer. “Robustly disentangled causal mechanisms: Validating deep representations for interventional robustness.” ICML 2019.
[Reddy et al.] Reddy, Balasubramanian, Vineeth, and others. “On causally disentangled representations”. AAAI 2022.
[Hyvarinen and Morioka]. "Nonlinear ICA of temporally dependent stationary sources." Artificial Intelligence and Statistics. PMLR, 2017.
[Hyvärinen et al.] "Identifiability of latent-variable and structural-equation models: from linear to nonlinear." arXiv:2302.02672 (2023).
[Marconato et al.] “Glancenets: Interpretable, Leak-proof, Concept-based Models”, NeurIPS 2022.
**Additional pointers to strategies for disentanglement.** We now see better how to address this: we plan to cite relevant prior work on approaches in line with the mitigation strategies we identified in the main text. We never intended to dismiss these works. | Summary: _Background_: The authors are interested in studying the properties of sequential, two-stage neuro-symbolic pipelines. Stage 1 is a neural network that reduces a high dimensional input (eg: an MNIST image) to a low-dimensional relaxation of a symbolic space (eg: 10 dimensional onehot vector) and Stage 2 is a probabilistic function (neural network or otherwise) that predicts the labels given the symbols utilizing a provided knowledge base. However, when trained end-to-end, the symbolic representation often collapses and the symbols learned are not equivalent to the ground truth symbols. This is called a reasoning shortcut (RS).
This work focuses on investigating the causes of such reasoning shortcuts and presenting suitable remedies for the same. The authors formally define reasoning shortcuts for NeSy architectures as symbolic mappings which allow the model to achieve a locally optimal loss, but whose semantics do not match the ground truth semantics.
The authors identify four factors that mitigate the occurrence of reasoning shortcuts: 1) additional prior knowledge, 2) additional data, 3) additional reconstruction targets in the objective function, and 4) additional inductive bias in the model architecture design. They study the impact of these mitigations on five NeSy prediction tasks with three NeSy algorithms.
The authors find that different combinations of mitigation strategies help improve performance in different cases.
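The two-stage computation described above can be sketched as follows (a hypothetical numpy mock of the MNIST-Addition task; `stage2_addition` is an illustrative name, not the paper's code). Stage 1 would output a softmax over the 10 digits for each image; Stage 2 marginalizes over all digit pairs consistent with each possible sum:

```python
import numpy as np

def stage2_addition(p_c1, p_c2):
    """Probabilistic reasoning stage for MNIST-Addition:
    p(Y = y) = sum of p(C1 = a) * p(C2 = b) over all pairs with a + b = y."""
    p_y = np.zeros(19)  # possible sums: 0..18
    for a in range(10):
        for b in range(10):
            p_y[a + b] += p_c1[a] * p_c2[b]
    return p_y

# Mock Stage-1 outputs: one maximally uncertain digit, one confident "3".
p_c1 = np.full(10, 0.1)
p_c2 = np.zeros(10)
p_c2[3] = 1.0

p_y = stage2_addition(p_c1, p_c2)
assert abs(p_y.sum() - 1.0) < 1e-9   # a valid distribution over sums
assert abs(p_y[3] - 0.1) < 1e-12     # only the pair (0, 3) yields sum 3
```

Because only the sum is supervised, a relabeling of the digits that preserved the observed sums would achieve the same label likelihood, which is exactly the kind of collapse the summary calls a reasoning shortcut.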
Strengths: **Originality:**
- While the concept of reasoning shortcuts was previously known, this paper is the first to offer a formal definition and analysis of reasoning shortcuts as a limitation of instantiations of neurosymbolic models.
- The mitigation strategies proposed are widely used in the neurosymbolic community, but they haven’t been connected before under the umbrella definition of reasoning shortcut mitigations. Hence, this paper brings a fresh perspective to why such strategies work well for NeSy systems.
**Quality and Clarity**
- The setup, definition, and analysis of reasoning shortcuts is well motivated and well explained.
**Significance**
- The paper identifies a significant problem with previous work and provides a formal definition of the problem and mitigation strategies to address the problem.
- I’m intrigued by the analysis of counting reasoning shortcuts because it presents a method to quantify how well a knowledge base describes a dataset / data generation process.
Weaknesses: **Clarity:**
- Table 1: DIS is mentioned in the caption, but no row for it exists in the table.
- Table 2: It took me a long time to figure out that each number is the frequency of RSs for a baseline on a dataset. Please explain what the rows, columns, and values mean for each table.
- Line 276: “when forcing disentanglement…” Disentanglement is a property of the architecture. How is the architecture modified to enforce disentanglement? The appendix doesn't mention this.
- “Roughly speaking, a concept F1 below 95% typically indicates an RS” I’ll defer to the author's judgement here because I’m not familiar with the data, but to me, a low concept F1 *on its own* signifies that the symbol extraction hasn’t been successful. Did the authors perhaps mean a low concept F1 but a high prediction F1 hints at the existence of RS's?
- Table 4 results: I'm not sure that I understand the results here. I defer to the authors, but I think this table and task demands extra scrutiny.
- The prediction accuracy and the concept accuracy both seem low (< 80% as a ballpark figure). Is this close to the state-of-the-art performance for this dataset?
- "concepts tend to align much closer to the diagonal..." I don't quite understand how being closer to the diagonal implies better concept generalization. The concepts aren't ordered, so an incorrect prediction, regardless of perceived distance from the correct concept, should be incorrect. Maybe I'm misunderstanding this sentence.
**Significance:**
- The method only compares against ablations of the same model. I'm concerned that this paints an incomplete picture about the relative performance of NeSy predictors compared to other models on these tasks. This lowers the significance of this architecture for the neurosymbolic community. I'd advise the authors to look into other fully neural and neurosymbolic methods that might have similar problem statements, and/or provide reasons for why the baselines are not suitable for the datasets. I've provided a couple of works I'm familiar with that are related (but not exactly equivalent) to the base algorithm studied in this paper.
* ROAP[1]: ROAP presents an end to end algorithm for symbol grounding and symbol manipulation when external knowledge is not available, but we have diversity of tasks to instantiate a multi-task learning objective. I am reminded of ROAP because one of their tasks is similar to the MNIST digit grounding tasks presented here.
* Concept Embedding Bottleneck models [2]: Concept embedding bottleneck models instantiate an extra layer that creates a concept embedding that's used for downstream prediction. I am reminded of CEM models because their algorithm is geared towards leveraging explainable symbolic information in real-world tasks without losing accuracy.
* I'd also be interested in whether the authors think these models use the mitigation strategies presented for reasoning shortcuts.
Overall, I'm giving this paper a __weak accept__. The authors formalize a problem that is definitely interesting to the broader neurosymbolic community. However, the related work and evaluation focus on a very specific definition of neurosymbolic models.
[1] https://arxiv.org/abs/2206.05922
[2] https://arxiv.org/abs/2209.09056
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses section
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed potential limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their interest in our paper and for finding it clear and significant. Below we address the points raised by the review.
**Typos and clarifications:**
Thank you for pointing these out, we will update Tables 1 and 2 accordingly.
As indicated in the main text, our architectural implementation of disentanglement is described in Appendix C - and specifically in Section C.4. We will clarify this.
The concept confusion matrices illustrate the mapping between ground-truth and learned concepts. Specifically, the rows/columns each represent a concept vector c (or g) and are sorted lexicographically (i.e., the first row is the vector (0, 0, 0), the second row is (0, 0, 1), etc.). A model unaffected by RSs would have c = g, so its confusion matrix would be the identity. The more diagonal the confusion matrix, the better the semantics acquired by the model. We will clarify this in the text.
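A minimal sketch of this construction (hypothetical binary concept vectors; `concept_confusion` is an illustrative name, not the authors' code):

```python
import numpy as np
from itertools import product

def concept_confusion(true_vecs, pred_vecs, n_bits=3):
    """Confusion matrix over concept vectors, with rows/columns sorted
    lexicographically: index 0 is (0,0,0), index 1 is (0,0,1), etc."""
    order = {v: i for i, v in enumerate(product([0, 1], repeat=n_bits))}
    M = np.zeros((len(order), len(order)), dtype=int)
    for g, c in zip(true_vecs, pred_vecs):
        M[order[tuple(g)], order[tuple(c)]] += 1
    return M

# A model unaffected by RSs predicts c = g, so all mass lies on the diagonal.
data = [(0, 1, 0), (1, 1, 0), (0, 0, 1)]
M = concept_confusion(data, data)
assert np.all(M == np.diag(M.diagonal()))
```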
**The 95 \% threshold:**
We chose a threshold of 95% on the F1-score as a proxy for non-RS solutions because the best-performing approach, DPL+H+C, achieves an F1(Y) of around 91% and an F1(C) of around 88% (see results in Table 3, top) while failing to properly recognize the digit 7 (as shown by the confusion matrix in Fig. 10, Appendix C.7). More generally, a high F1(Y) combined with a sub-par F1(C) indicates a model affected by an RS (as suggested by Eq. 4).
**BDD-OIA performance:**
In BDD-OIA, the state-of-the-art accuracy is obtained by the CBM-AUC model [3].
This method also scores below 80% F1 for both labels and concepts: CBM-AUC obtains an F1-mean (Y) of 70.8 $\pm$ 0.1% and an F1-mean (C) of 62.1 $\pm$ 0.1%. We will include these numbers in the results table.
**Comparison with other NeSy predictors**
Given how broad and heterogeneous NeSy is, we had to focus on a selection of representative architectures: DPL is a prototypical probabilistic logic approach based on a reasoning layer; SL represents penalty-based probabilistic logic models; LTN represents penalties based on fuzzy logics (see the “NeSy Predictors” paragraph on page 3).
Thank you for pointing out [1] and [2]. Notice that both architectures comply with the structure of NeSy predictors shown in Fig. 2, as they extract concepts C and process them to yield a prediction Y, and as such fall within the larger scope of our results. The main differences are:
* ROAP learns knowledge during training. From our perspective, this is not a substantial difference: even if the learned knowledge resembles the ground-truth knowledge, the model is still subject to the same RSs that the latter entails. Worse, since the knowledge is not fixed, the model is less likely to acquire concepts with the right semantics. On the bright side, since ROAP learns multiple tasks, it naturally leverages MTL mitigation. We will mention this as an example usage of MTL in the knowledge mitigation section.
* CEM learns both concepts and a (linear) prediction layer mapping concepts to labels. Unlike NeSy architectures, CEMs cannot leverage prior knowledge and (like other concept-based models) require full concept supervision, cf. Section 2.
We fully agree that investigating RSs for neural architectures, concept-based models, and NeSy models with learned knowledge is a critical direction of research, and we plan to pursue it in future work. We will mention this in the conclusion.
[3] Sawada and Keigo "Concept bottleneck model with additional unsupervised concepts." IEEE Access 10 (2022)
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications! I'm keeping the score where it is right now. | Summary: The paper presents a in depth analysis of reasoning shortcuts and the impact they have of the learning process of neuro-symbolic learners.
On the ground of the performed analysis, they propose 4 different mitigation strategies to alleviate the problem:
1. Knowledge-based mitigation
2. Data-based mitigation
3. Objective-based mitigation
4. Architecture based mitigation
Strengths: The paper studies an important problem and it is very well presented. It is easy to follow and the formalisation of the concept of reasoning shortcuts was needed.
The experimental analysis supports the claims.
Weaknesses: I have only couple of perplexities regarding the first two mitigation strategies:
1. in the knowledge-based mitigation, the authors claim that, depending on the application, collecting new knowledge might not be feasible. They then propose multi-task learning as a practical alternative. Here is where I get confused:
- how can this be more practical? In addition to collecting more knowledge, you now need to collect additional labels.
- if I understand correctly, Proposition 4 concludes that the deterministic optimum $p_{\theta}(\mathbf{C} \mid \mathbf{G})$ of the MTL loss is, in the end, a deterministic optimum of a single task whose prior knowledge is the conjunction of all the prior knowledge used in the different MTL tasks. How is defining different MTL tasks better than collecting more knowledge, then?
2. In Section 5.2, the authors say that RSs can occur even when the dataset is exhaustive. To me, this seems more a problem of not choosing the right concepts for the task, i.e., there is no bijective correspondence between the concepts in $\mathbf{C}$ and $\mathbf{G}$. For example, suppose we have the (very simple) task of distinguishing bird images from non-bird images, and suppose we have one concept encoding the classes "pigeon, flamingo, duck" (and we have the simple background knowledge that any of these classes implies "bird"). In this case, one mitigation strategy could be to add more supervision on the concept (as proposed), but wouldn't it be better to actually choose concepts that are "meaningful" for the task?
Along the lines of the last point above, I was very surprised by the number of reasoning shortcuts learnt in the MNIST-Addition task in Table 2. Can you provide some examples of such RSs? Also, these results are taken on *optimal* runs. What would the results be on 30 *standard* runs? Also, can you provide some results in terms of accuracy on both the concepts and the final outcome, with and without disentanglement?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Not applicable
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments about our work, in particular for finding it important, needed, and well-presented. Below, we address their concerns.
**Is MTL more practical than further constraining the prior knowledge?**
We agree that, compared to constraining the knowledge only, multi-task learning (MTL) is more expensive in that it requires gathering both extra knowledge and extra labels. However, MTL is a sensible choice whenever the first option is infeasible, as we noted in Section 5.1. It is in this sense that MTL is more practical.
To see this, consider the MNIST-Addition experiment (Table 3, bottom). Here, the knowledge specifies how to perform addition, and no extra constraints can be added without changing the semantics of the task. This can be side-stepped by pairing the addition task with a multiplication task, and doing so completely avoids RSs.
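To make this concrete with a hypothetical sketch (illustrative code, not the paper's implementation), one can enumerate which digit pairs are consistent with a sum label alone versus with a sum and a product together:

```python
# Hypothetical illustration of why pairing addition with multiplication
# constrains the concepts: many digit pairs share the same sum, but far
# fewer share both the same sum and the same product.
from itertools import product

DIGITS = range(10)

def pairs_matching(sum_label, prod_label=None):
    """All (d1, d2) digit pairs consistent with the given labels."""
    return [(a, b) for a, b in product(DIGITS, DIGITS)
            if a + b == sum_label
            and (prod_label is None or a * b == prod_label)]

# The sum alone leaves three candidate concept assignments...
print(pairs_matching(2))     # [(0, 2), (1, 1), (2, 0)]
# ...while sum + product pins down a unique one.
print(pairs_matching(2, 1))  # [(1, 1)]
```

Note that some ambiguity can remain (e.g., sum 2 with product 0 still admits both orderings of 0 and 2), which is where disentanglement plays its complementary role.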
Section 5.1 currently reads “experts may not be available, or it may be impossible to constrain K without also eliminating concepts with the intended semantics”. We agree this is confusing and will change it to “experts may not be available, or it may be impossible to further specialize the knowledge K without altering the semantics of the prediction task”.
**Do Reasoning Shortcuts appear when changing the underlying concepts?**
We fully agree that the structure of the knowledge K and the concepts appearing therein determine whether, and how many, RSs exist. This follows from Theorem 2.
Manipulating K and C is one way of reducing the number of RSs. One option is to replace low-level concepts with higher-level variants. For instance, in BDD-OIA one could compact all obstacle concepts (e.g., “pedestrian”) into a *single* “obstacle” concept. This leads to a noticeable reduction of RSs, which can be evaluated using Eq. (6).$*$
The main issue is that doing so is not always possible. In the XOR problem, for instance, we cannot lump together any concepts without changing the semantics of the problem.
Identifying cases in which concept manipulation can work, and automating said procedure, is an interesting research question worth investigating in future work.
$*$ This case still obeys Theorem 2 if we replace C, G, and K with their “coarser” variants.
**Extended results for Table 2**
**Example of RSs for MNIST-Addition:** In section C.2.2, we described the RSs affecting the MNIST-Addition task with all digits. We will clarify this in the main text. The idea is as follows: when both digits are processed together (no disentanglement), the model can map the two digits to whatever combination of concepts gives the correct sum. As an example, all digit pairs with sum $2$, i.e., $G_1 + G_2 \in \{0 + 2,\; 1 + 1,\; 2 + 0\}$, can be mapped to the single combination $C_1 = 1$ and $C_2 = 1$.
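A toy sketch of such a shortcut (purely illustrative; not the paper's code): a degenerate concept extractor that ignores the digits' identities can still achieve perfect label accuracy.

```python
# Toy reasoning shortcut: map every digit pair with the same sum to one
# fixed concept combination. The predicted label (the sum) is always
# right, but the predicted concepts (the digits) are usually wrong.

def shortcut_concepts(g1, g2):
    # Ignore which digits were shown; just split the sum as evenly as possible.
    s = g1 + g2
    return (s // 2, s - s // 2)

for g1, g2 in [(0, 2), (1, 1), (2, 0)]:  # all pairs with sum 2
    c1, c2 = shortcut_concepts(g1, g2)
    assert c1 + c2 == g1 + g2            # label accuracy: always correct
    print((g1, g2), "->", (c1, c2))      # concepts correct only for (1, 1)
```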
**Optimal vs. standard runs:** The reason why we considered 30 optimal runs is because reasoning shortcuts are defined as models that achieve (near) optimal performance by leveraging unintended concepts. The performance of standard models is not representative of this definition.
Regardless, we report the label and concept accuracies of 30 such models in the rebuttal PDF for DPL, SL, and LTN on both MNIST-Addition and XOR, with and without disentanglement. All models fare much better at predicting labels than concepts, as expected, indicating that even "non-optimal" models suffer from RSs. These results are compatible with those reported in the main text.
*More in detail*: without disentanglement (-), all models achieve high label accuracy (Acc$(Y)$) by leveraging poor quality concepts, as shown by the low concept accuracy (Acc$(C)$). By imposing disentanglement (DIS), the concept accuracy generally improves, indicating that models get closer to the intended semantics.
On the XOR data set, some of the learned models do not reach optimality, regardless of architecture (DPL, SL, LTN). This occurs because they sometimes remain stuck in bad local minima, with and without disentanglement. This is reflected by the large variance observed in the XOR results. Conversely, on MNIST-Addition, we obtained stable runs, yielding optimal behavior most of the time. The only exception here is LTN without disentanglement, for which we observed more variability.
---
Rebuttal Comment 1.1:
Comment: All my questions have been answered. I'll keep my score at 7. | Rebuttal 1:
Rebuttal: We are grateful to all reviewers for taking the time to evaluate our paper and appreciating the motivation (**p1H9**), the theoretical analysis (**Qpe9**), and the significance of our work (**yCW5**, **szNE**, **Qpe9**), as well as the quality of the presentation (**yCW5**, **szNE**, **Qpe9**, **p1H9**).
We will address the points they raised in the detailed replies.
We also attach to this comment a PDF containing additional empirical results for **Q1**.
Pdf: /pdf/b6d73f49dafc4cc89025743a2e9b71e315c8746f.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Emergent Communication in Interactive Sketch Question Answering | Accept (poster) | Summary: The work focuses on multi-turn sketch-based emergent communication.
Authors propose a novel two-round interactive task, named Interactive Sketch Question Answering (ISQA).
They suggest an architecture and an implementation, based mainly on existing components (MCAN, Fast-RCNN) while incorporating several novel ideas such as 1) dynamically restricting the channel capacity by controlling the number of transmitted pixels, and 2) providing feedback from receiver to sender via focus boxes.
They suggest a triangular evaluation method that seeks a balance between human interpretability and task accuracy.
Strengths: The main strength of this paper is by suggesting a two-turn visual communication game that nicely models the need for two parties to communicate, with partial observability, to solve a task.
In addition, the paper demonstrates a method to achieve a nice balance between interpretability and pragmatism.
The most interesting observation, to my mind, is provided in lines 304-306, where the authors show that when the complexity is too low, the reasoning module cannot infer sufficient useful information in a single round and thus needs to request more focused information (a clarification question).
The way the architecture is composed and implemented for modeling the problem at hand is non-trivial and interesting.
Weaknesses: The authors assess human interpretability using the CLIP model. Doing an actual human survey of the results would be more appropriate.
The interpretability/pragmatism balance is essentially solved by adding a CLIP-based loss that provides additional supervision towards human interpretability, which is not aligned with the intention to model communication emergence.
Experimental datasets are not described in enough detail. For example, it is unclear how the three reported tasks (Yes-No, Number, Other) correspond to the two described datasets.
Results are not totally consistent (for example, in the Yes-No task, where PragGeo is lower than both the geometric and pragmatic models), and more experiments over more datasets seem needed.
Notations and explanations can be further worked out to assist the reader. See some examples in the Questions section.
Maybe worth mentioning references:
Pragmatic inference and visual abstraction enable contextual flexibility during visual communication, by Judith E. Fan, Robert X. D. Hawkins, Mike Wu, and Noah D. Goodman
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In section 3.1 what are the dimensions of H_i and A_i? (explained later)
Line 150 - will be good to stress the fact that b_i is a ratio (explained later)
Line 244 – will be good to explain what proposals are.
Lines 293-297 the x-axis is not easily defined (line 293) and then referred to as 0.1N, 0.3N, which are hard to find in the graphs. Can’t you use the 0.xN scale? or at least add the values you refer to as labels to the x-axis?
Datasets are missing the explanation of complexity/difficulty of tasks, namely yes/no, number and other which you refer to in the figure. A baseline random accuracy can also be helpful to add or mention.
Table 1: is lower better? Worth mentioning.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: A limitations section is provided.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and valuable suggestions. Here are our detailed responses.
---
**W1: Human survey**
We agree with your opinion and added a human survey. The human evaluation results are consistent with the CLIP distance. See general response 3.
---
**W2: CLIP-based loss provides additional supervision.**
We think leveraging the CLIP model is reasonable for four reasons.
- The primary rationale behind striving for human interpretability is to achieve a consensus with humans on various concepts. This ensures that the emergent communication resonates with human understanding and can be intuitively comprehended.
- We view the CLIP model as an exceptional tool for learning human consensus and perceptual abilities. It serves as a teacher for our systems to better align with human cognitive results.
- Previous methods, such as [13], have also leveraged VGG classification models during training. This precedent signifies the utility and relevance of employing such models to drive desired outcomes.
- While the idea of entirely free emergence is indeed exhilarating, we recognize that there's a gap between this ideal and current research paradigms. Our approach is an endeavor to traverse this gap, balancing the excitement of pure emergence with the practicality of human-aligned understandings.
We believe that, through these methods, we maintain the spirit of modeling communication emergence while simultaneously ensuring that the results align with human interpretability.
---
**W3 & Q5: Details about experimental datasets.**
Yes-or-No, Number, and Other are different question types in the dataset (categorized according to the answer).
- Yes-or-No means the answer to the question is yes or no. This type of question is easier to solve and mainly focuses on a single object or pattern in the image.
- Number means the answer is an Arabic numeral and often requires detailed information.
- Other includes the remaining questions, whose answers span more than 3000 alternative words or phrases.
- We will add the related explanations to the updated paper.
---
**W4: Result consistency.**
We confirm that the results are consistent and reliable. The observation mentioned by the reviewer is commonly known as Simpson's paradox, where overall statistical results are inconsistent with local statistical results. Yes-or-No questions often require less information and, since they have only two options, are more prone to overfitting and randomness.
---
**W5: Further notations and explanations.**
Thanks for your advice! We will correct those issues in the revised version.
---
**W6: References.**
Thanks for your advice! We will add those references and do a more comprehensive literature review in the recognitive community.
---
**Q1**:
The dimension of $H_i$ is $m\times 4$ and the dimension of $A_i$ is $n\times 1$, where $m$ denotes the number of regions and $n$ denotes the number of alternative answers. The 4 entries $(x, y, \text{height}, \text{width})$ locate a box in the sketch.
---
**Q2**:
Thanks for your advice, we will modify our updated paper.
---
**Q3**:
Proposals are a critical part of the Faster R-CNN architecture: they define the potential regions where objects might be present. Here is how proposals are generated in more detail:
a. Anchors: At each position of the image, the RPN proposes multiple regions of different scales and aspect ratios. These are called anchors. For example, you might use 3 scales and 3 aspect ratios, for a total of 9 anchors at each position.
b. Classification and Regression: For each of these anchors, the RPN predicts two things:
- Whether it contains an object or not (classification).
- Adjustments to the anchor box to better fit the object (regression). These adjustments are typically four real-valued numbers for the coordinates of the top-left and bottom-right corners of the box.
c. Proposal Selection: From all the proposed regions, a set number (e.g., 2000) of the best scoring regions are selected as region proposals using a technique called Non-Maximum Suppression (NMS) which helps in eliminating multiple detections for the same object.
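As a rough sketch of the proposal-selection step (a generic NMS implementation for illustration, not the paper's or any library's code):

```python
# Rough sketch of Non-Maximum Suppression (NMS) for proposal selection.
# Boxes are (x1, y1, x2, y2) tuples; scores are RPN objectness scores.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, iou_thresh=0.7, top_k=2000):
    """Greedily keep the highest-scoring boxes, dropping near-duplicates."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
            if len(keep) == top_k:
                break
    return keep

# Two heavily overlapping proposals and one distinct proposal:
boxes = [(0, 0, 10, 10), (0, 1, 10, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # the lower-scoring duplicate is suppressed
```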
---
**Q4**:
Sorry for the inconvenience; we will modify the x-axis in the updated version.
---
**Q5**:
Please see W3.
---
**Q6**:
Thanks for your advice! Lower is better for the CLIP distance. Besides, we will add a human survey; see general response 4.
---
Overall, we hope that our responses can fully address your concerns. We will be grateful for your further feedback.
---
Rebuttal Comment 1.1:
Title: I read and acknowledge authors responses.
Comment: Thanks and good luck. | Summary: This paper proposed a new multi-round visual communication task with an interactive system for emergent communication. During the game, the sender needs to sketch on the canvas to communicate a target image, while the receiver needs to answer a question regarding the target image and give feedback on the sender’s sketch. The training framework balances task accuracy, drawing complexity, and human interpretability. Experimental results show that the agents can communicate successfully using sketches and feedback. The emerged sketches can maintain interpretability and high accuracy with lower complexity. And the feedback given by the receiver can effectively enhance the performance.
Strengths: 1. This paper proposed a novel setting where each of the agents can only observe a partial environment that necessitates the feedback of the receiver. And the feedback is smartly provided in a sender-understandable way (bounding boxes). Compared with the previous work, this environment enables bi-directional communication where both agents can “draw” on the canvas.
2. The training framework considers triangle optimization – task accuracy, drawing complexity, and human interpretability.
Weaknesses: 1. Complexity B:
for the complexity in section 5.3, what is the specification for $b_i$ and $h_i$ separately?
It will be interesting to know which agent contributes more to the efficiency – while achieving high accuracy, is the high efficiency due to the sender drawing less or the receiver giving more accurate feedback?
2. The maximum round: only models trained with two rounds are reported. Is there a reason that the maximum round is set to 2? Given more rounds, the performance change can help us understand whether one round of feedback from the receiver is enough to finish the task.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Why is the $\sum b_\tau$ given to the sketch model? Would $b_i$ be sufficient?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: It will be interesting if the agents can control the complexity of the sketch based on the target image and the receiver’s feedback. Similarly for the number of bounding boxes at the receiver’s side.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for your time and valuable suggestions. Here are our detailed responses.
---
**W1:** Which agents contribute more.
Our setting considers a collaboration between the sender and the receiver. Through the proposed feedback mechanism, both agents have equal rights to affect the interaction, and neither agent can be considered separately. The sender can use less complexity to transmit sufficient information because the receiver gives it accurate feedback, while that accurate feedback builds on decent round-1 communication. To illustrate, this can be likened to a teacher identifying key content for students prior to the final exam. By concentrating their effort on this key content, students are able to achieve commendable results in less time. Meanwhile, the teacher can accurately point out this key content because the teacher has communicated sufficiently with the students, thereby gaining an understanding of the students' strengths and weaknesses in the curriculum.
Overall, both components collaborate to deliver superior performance.
---
**W2:** Maximum round.
Two rounds of interaction are enough to finish the task. The 3-round model provides slightly lower performance than the 2-round model. There are two reasons why a third round cannot improve performance: first, a 3-round approach reduces the complexity budget assigned to the first round and thus the accuracy of the feedback module (see lines 304-306 for the reasons); second, each round of interaction requires extra complexity for the feedback message, so the complexity assigned to drawing decreases.
---
**Limitation:**
Thanks for your advice! We will explore how to enable the sender and receiver to control complexity in future work.
---
Overall, we hope that our responses can fully address your concerns. We will be grateful for your further feedback.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttal. It would be better that the clarification or further experiments on the maximum round can be added to the revision. I will keep my rating unchanged and recommend accepting this paper.
---
Reply to Comment 1.1.1:
Title: Thanks for your reply!
Comment: Thanks for your kind reply! Since the further experiments on the maximum number of rounds take a lot of time and computation resources, we will add them to the updated version of the paper. | Summary: In this paper, the authors proposed a new task about emergent communication by tackling visual question answering as an iterative sketch question answering process. The authors proposed a three-factor evaluation metric, including question answering performance, drawing complexity and human-interpretability. A framework consisting of Sender and Receiver is proposed to perform multi-round interaction to tackle the proposed task. VQAv2 is used for empirical evaluation of the proposed framework.
Strengths: 1. The problem setting is very interesting.
2. The proposed method is intuitive and straightforward.
3. The paper is presented clearly and easy to follow.
Weaknesses: 1. The new insight is limited.
a. Visual question answering is indeed a new task compared to classification. But what is unique in terms of emergent communication when visual question answering is used as the target task? From the current manuscript, there is no metric or empirical evidence showing improved communication quality over [14].
b. Although the authors target multi-round interaction, the two settings evaluated are one-round and two-round. From the visualizations, the sketch used is usually the sketch of the main object. There doesn't seem to be a pattern in terms of communication with only one/two rounds of communication.
c. More fundamentally, how does communication emerge, and how does communication get better/more efficient when the task is more complex? The reviewer feels these fundamental questions are still left unsolved, and the current manuscript doesn't show any potential for helping solve these problems.
2. Empirically, current evaluation is not sufficient enough. Currently, the communication quality is mainly measured through automatic metric like CLIP-based score. There should be some quantitative analysis verifying the correlation between automatic score and manual measurement.
Technical Quality: 1 poor
Clarity: 3 good
Questions for Authors: Please check the weakness for details.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 1 poor
Presentation: 3 good
Contribution: 1 poor
Limitations: Need more discussion on the fundamental questions of emergent communication.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer APpT
Thank you for your time and valuable suggestions. Here are our detailed responses.
---
**W1:** a) Uniqueness of the proposed task? Comparison to classification. Comparison with [14]. b) Pattern with one/two round communication. c) Communication gets better when the task is more complex?
**a)** We address this point from three aspects:
1) The uniqueness of the proposed visual question answering task for emergent communication is three-fold: information disparity, a feedback mechanism, and triangle evaluation; see more details in General Response 1.
2) The proposed visual question answering task setting is better at promoting emergent communication than the classification task because it bears a higher resemblance to human communication in the real world. Human interactions go far beyond merely classifying images; more often, they involve bilateral communication to achieve a complicated task. From this perspective, our question-answering task is more closely aligned with daily human interactions than the common Lewis game in previous works.
3) It is hard for us to experimentally compare with [14], because the aims and task settings are different and incomparable; see more details in General Response 3.
**b)** There is one distinct pattern in 2-round communication: for the same image, each question leads to a unique sketch in the second round, reflecting that the interaction successfully helps the sender focus on question-related content. As depicted in Figure 7, it is evident that for disparate questions the weight assigned to different regions varies considerably, leading to a prioritization of pixel transfer from lighter areas. For the question "How many people can you see in the picture?", the sketch transferred in the second round focuses on the people, while for the question "Is it still snowing in the picture?", the sketch transferred in the second round is presented in a more global fashion. This shows that the sketch in the second round depends on the question and will definitely have different patterns compared with round 1.
**c)** By making the task more complex, this work shows a lot of potential to promote emergent communication. From an intuitive perspective, it is natural for a more complex task to promote emergent communication, because there is no need to create a sophisticated linguistic system if the task requiring collaboration is very simple. This work provides a recipe for designing a complex task by creating information disparity, designing a feedback mechanism, and proposing triangle evaluation. From an experimental perspective, we have found a series of interesting patterns; see more details in General Response 2.
---
**W2:** Verify the correlation between automatic score and manual measurement.
To verify the correlation between CLIP and manual measurement, we conducted a human evaluation in which 12 participants were tasked with assessing the human-interpretability of three different models. Participants were presented with three corresponding images from each model simultaneously and asked to score them on judgment criteria ranging from 1 to 5 (a higher score indicating better interpretability); see Table 1 in the rebuttal PDF. We sampled 3,000 sets of images consistent with the settings in Table 1 of the paper. Each set comprised images generated by the Pragmatic, PragGeo, and Geometric models from the same RGB image.
Table 2 in the rebuttal PDF compares the human interpretability score and the CLIP distance. We see that: 1) the human-experiment results and the CLIP distance consistently show the order Geometric, PragGeo, Pragmatic in terms of human interpretability; 2) both the PragGeo and Geometric models obtain an average human interpretability score between "partially understand" and "mainly understand", while the sketches provided by the Pragmatic model cannot be understood; and 3) considered together with Fig. 3, PragGeo achieves a more optimal balance among task performance, drawing complexity, and human interpretability.
---
Overall, we hope that our responses can fully address your concerns. We will be grateful for your further feedback.
---
Rebuttal Comment 1.1:
Title: Concerns remain
Comment: Thanks for the rebuttal.
1. The reviewer understands that the proposed model is not directly comparable with [14]. This doesn't necessarily mean that comparing with other tasks is not feasible. For example, the authors could build a classification version of the dataset by only using the labels of the object categories of the COCO dataset, on which VQA is built. Further degraded versions of the task could be built by choosing different ways of communication (sketch or binary). Based on this, apples-to-apples comparisons can then be done.
2. The reviewer still thinks two rounds are not enough to really show the complexity of emergent communication. Changing the focus of the sketch based on the feedback doesn't necessarily reflect the complexity of communication. The reviewer thinks there is some relationship between the communication bandwidth (B) and the number of rounds needed to accomplish the task. The current evaluation is really too limited in terms of one important aspect of communication: communication is done by modeling a sequence of signals.
3. The reviewer also has some follow-up questions on the human evaluation. How exactly is the evaluation done? Is there any training process for the human evaluators? How is each criterion defined?
4. Can the authors comment on the fundamental questions of emergent communication and which of them are directly answered in this paper?
---
Reply to Comment 1.1.1:
Title: Thanks for your reply
Comment: 1. We appreciate the reviewer's wonderful suggestions about the comparison experiments. We agree that a fair comparison with the iconic previous work [14] would be great for the community. We aim for our work to serve as an expansion and exploration of informative feedback in emergent communication, thus allowing emergent communication to be considered more comprehensively together with [14]. Therefore,
- For ISQA task with the binary flag, we will add experiments immediately.
- For the degraded version of the task, it is hard for us to finish the model training and evaluation before Aug. 21. But we will add those experiments to the updated version of paper.
---
2. If we understand the reviewer right, the reviewer means that communication with more rounds is a better setting for promoting emergent communication. We agree with the reviewer, and this is exactly why we promote multi-round interaction and propose the feedback mechanism.
- Our method supports more than 2 round communications. We are working on the experiments for 3 and 4 rounds. Since it will take about one week for the train and evaluation, we will add the experimental results to the updated version of our paper.
- In this work, we set an upper bound on the number of communication rounds. Like in the Avalon game, where there are at most 5 tasks, the reason we set an upper bound is to keep the setting friendly in terms of time and computation. In ISQA, some questions, with more than 3000 options, are very challenging even for humans. Continuous gaming without limits on communication rounds can be both time-intensive and computationally demanding. In [14], the maximum number of communication rounds is 7, which is impressive, but a ceiling remains. Additionally, the task in [14] focuses on classification, which is less GPU-intensive than question answering.
- Both the number of rounds and the message content are important for emergent communication. In terms of feedback messages, our approach extends [14]: instead of using a binary flag, we employ continuous feedback messages that offer more detailed information.
---
3.
- Evaluation: Participants see images including the RGB image and images generated by our Pragmatic, PragGeo and Geometric models simultaneously. Then they provide a score for the three generated images according to our criteria and their intuition.
- Training: We provide a thorough explanation of our criteria to all participants, guiding them to evaluate based on their natural human intuition. We provide basic training for a few trials with the rating software.
- Criteria (scored 5 down to 1):
    - 5: Understand all the content, as well as in the RGB image.
    - 4: Understand a major part of the content, though not as clearly as in the RGB image.
    - 3: Understand some part of the content.
    - 2: Understand only a few parts of the content, or find only a few common features between the raw image and the sketch.
    - 1: Totally fail to understand anything.
---
4. Three fundamental questions of emergent communication are directly answered:
- **What is a prerequisite for interaction in emergent communication?**
Information disparity is a prerequisite for interaction. Fig. 7 shows the feedback for the same image given two different questions. We see that our feedback transfers question-related information to the sender, enhancing communication efficiency via a gradient-informed region of interest (which can be displayed as a sketch, as shown in Figure 2); this enables a human-like multi-round interaction for the first time in visual emergent communication. This provides an insight that one of the reasons interaction emerges might be to query task-aligned information when the receiver is more acquainted with the object than the sender.
- **Can multi-round interaction (feedback) promote more efficient communication?**
Yes: through multi-round interaction, tasks can be accomplished more effectively while using fewer communication resources. Fig. 3 shows two facts: i) the 2-round and 1-round models have similar performance when $B > 0.2N$; and ii) the 2-round model achieves superior ISQA accuracy compared with the 1-round model at the same complexity when $B \in (0.01N, 0.2N)$. These observations suggest that without communication constraints, multi-round communication does not necessarily provide more information than a single round. However, multi-round communication can optimize the use of communication resources and lead to more efficient exchanges.
- **What is a prerequisite for multi-round interaction (feedback) to be beneficial?**
For feedback to be effective, the receiver needs sufficient input information. Fig. 3 shows that the 2-round model has no advantage over the 1-round model when the complexity constraint is too low ($B < 0.01N$). We see that only feedback built on a minimal complexity budget can be beneficial. This emphasizes that effective feedback depends on having sufficient background knowledge and essential preliminary information, a principle that resonates with broader human societal values.
Strengths: - The idea of multi-round EC is very exciting! I would love to see research move in this direction and, given just the difficulty of agents who learn when to talk (and talk with reasonable sparsity), I imagine there are plenty of interesting problems to solve in that space.
- On that topic, the ability for the sketch model to generate very different sketches using the same image (when the question demands it) is demonstrated here and is a perfect example of what I would expect as sort of a main contribution from an EC model in this space.
- The problem setup and task are novel
Weaknesses: - I won't dwell on this too much since it's primarily a track issue, and the paper could be considered by other merits, but it is difficult for me to consider this emergent communication at all. It does however mean that a lot of EC motivation cited here doesn't seem very relevant upon reaching the experimental design section and understanding the learning problem.
- While the authors argue that existing work relied too much on downstream tasks for evaluation, regardless of this point, evaluating downstream did serve an important purpose in that it helped demonstrate some potentially useful application of the learned protocol. Here the protocol seems rather contrived. Of course something like a referential game is also rather contrived and I concede that point. However, I'm willing to accept contrived environments if what emerges in the language is itself interesting and gives us some insight on what sorts of less contrived environments we may consider in future work.
- The drawer is vastly simplified when compared to the existing visual referential game work (cited here). The authors state, "Vision-based EC systems have been developed with the aim of imitating the pictographic systems that were used in early human communication", but a pictographic system tends to abstract important visual features, sometimes caricaturing them for the purpose of clarity in communication. I'm not convinced that this drawer is an appropriate substitute for this process. If I want a giraffe drawn in 3 strokes vs. 8 strokes, we see the important visual features that are most characteristic of the giraffe. If we are essentially revealing areas of an edge/depth-detected version of a real image, it seems very different. From the examples of the sketches produced by various modes (pragmatic, geometric, prageo), none strike me as very similar in creating some simplified version of the original high-res image of the object, and a case should be made why this process could be considered an imitation of those systems in early human communication.
- While generating very different images from the same image when the question differs
- Is the binary flag model of [14] really that different from what occurs in this work? Of course, time aside, the listener would like to continue receiving new information until the end. That seems like the optimal policy. So whether the listener conveys to the speaker what it would like to see, or the speaker already has a priority order in which it would reveal / detail more parts of the image, that's not a hugely important distinction in my mind, unless the speaker and listener have very different perceptual abilities, or goals in mind. It would of course be good to be user-centric in many cases, but how important is it? I would have liked to see a comparison.
- In comparison to existing work, and bearing in mind the emphasis on human interpretability in the paper narrative, I would have liked to see this method compete with [13]/[14] with a human substituting as the listener, or at least trying to solve the task (perhaps with no communication at all). Without being able to play with the models directly, the previous work seems more interpretable with fewer strokes. I really find it surprising that humans aren't involved in the measuring of human interpretability, and I think that fact hints that there may be a more suitable name for what is being measured.
Overall I think there is some promising work in how the task is set up, but deviations from the sketch model and the region-based (rather than complexity-based) approach of existing work seem like steps backwards. The lack of direct comparisons to previous work, or of adaptations of existing work to this setting, is detrimental both to placing it in the larger research context, and to understanding the relative strengths/weaknesses of the proposed approach.
Other comments:
Paragraph 1: These claims seem speculative / opinion-based.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: W1:
First, our work falls into the topic of emergent communication. Emergent communication aims to facilitate human-like communication between intelligent agents, as delineated in [1,2,3,4]. In this work, we propose a novel question answering setting to promote multi-round, bilateral, reciprocal communication between a pair of a sender and a receiver, which clearly follows the definition of emergent communication. See more details in General Response 1.
Second, our motivation is to promote multi-round interaction, and our experiments can support this motivation from two aspects:
1) Quantitatively, Fig. 3 shows the downstream performance as a function of the communication complexity. We see that the 2-round communication significantly outperforms the 1-round communication across a wide range of communication complexities. This validates that multi-round interaction promotes higher communication quality.
2) Qualitatively, Fig. 4 and Fig. 7 illustrate the step-by-step process of the interactive communication. We see that our feedback mechanism enables the sender to pay more attention to specific areas according to different questions from the receiver, guiding the communication to focus on question-critical areas. The experimental findings validate that our multi-round setting simulates human-like multi-round conversation.
---
W2:
- We agree with you that the evaluation of downstream tasks is important, but relying only on downstream tasks would cause non-interactive and non-interpretable issues. In comparison, the triangular evaluation is a more comprehensive evaluation metric; see more details about both points in General Response 1.
- We are not sure how the reviewer defines whether a protocol is contrived or not. But our task setting is more natural and better at promoting emergent communication than previous works, because it bears a closer resemblance to human communication in the real world. Human interactions go far beyond simple games, such as classifying images; more often, they involve bilateral, interactive communication to achieve a complicated task. From this perspective, our multi-round question-answering task is more closely aligned with daily human interactions than the common Lewis' game in previous works.
- Our work provides three interesting insights: the prerequisite, the promoter and the condition of interaction in communication. See more details in General Response 2.
---
W3:
We simplified the drawer for two reasons.
- First, existing stroke-based non-iterative drawing methods cannot produce sketches with sufficient quality to satisfy the basic needs of ISQA. The sketches provided by [13] contain only a blurry outline of the major object and cannot contain enough information to solve questions like "What page is displayed on the monitor?"; see Table 1 in [13]. Only iterative methods, such as [40], have yielded visually satisfactory results. However, iterative methods are time-consuming, often requiring several minutes to generate a single sketch.
- Second, our motivation focuses on multi-round interaction, which is an orthogonal direction to drawing. Our work emphasizes multi-round interaction, establishing a mechanism in which both the sender and the receiver should be capable of sending informative messages to each other and guiding the direction of the communication. The proposed multi-round interaction framework is also compatible with stroke-based drawing methods. Once a more mature stroke-based drawing method is developed, we can substitute the drawing module in the current framework and test its communication quality.
Besides, if we consider the AI agents as intelligences living in a discrete world, it is reasonable to treat a dot as the AI agents' stroke.
---
W4:
I guess you mean "why generating". Intuitively, different questions require different visual information to provide the corresponding answers, leading to diverse sketches. This is actually one important insight we learnt from the experiments. For example, in Figure 7, a representative image is subjected to two distinct questions. The first inquiry, "How many people are visible in the image?", inherently emphasizes the concept of "people". Consequently, regions within the sketch depicting individuals take precedence, as they are more pertinent to the posed question, commanding greater weight in the question-answering module based on the attention network. Conversely, when the question shifts to "Is it still snowing in the image?", the focus broadens to a more global perspective. This is attributed to the absence of a salient region garnering specific attention within the question-answering module for this particular question.
---
W5:
- Yes, the feedback mechanism between [14] and this work is significantly different; see more details in General Response 3.
- If we understand the reviewer correctly, the reviewer is considering a comparison between sender-centric and receiver-centric communication. However, different from these two settings, our setting considers bilateral, reciprocal communication. The sender and the receiver have equal rights and collaboratively finish the QA task. They share the same target, while neither of them alone has the full information needed to reach it.
---
W6:
- Our motivation is to promote multi-round, bilateral interaction for emergent communication, instead of creating better drawing merely from the sender perspective; also see W3.
- We cannot directly compare our work with [13,14], because they are targeting different tasks. Even [13] and [14] have not been directly compared. In this case, a forceful comparison is unfair and cannot provide any useful conclusion.
- Human experiments: We conduct two human experiments to validate 1) human can be a good receiver (but slightly worse than our model); and 2) CLIP distance and human perception are consistent; see General Response 4. | Rebuttal 1:
Rebuttal: **General Response 1: Uniqueness of the proposed ISQA task for emergent communication:**
We propose a novel task setting, whose goal is to promote multi-round, bilateral, interactive communication between a pair of a sender and a receiver. To achieve this goal, our proposal has three unique characteristics: information disparity, triangle evaluation, and feedback mechanism.
- Information disparity. Our setting simulates the question answering between two people. Without communication, neither the sender nor the receiver can accomplish this task alone. Specifically, the sender has the source information (image), but does not know the target information (question); and the receiver has the target information (question), but does not know the source information (image). Therefore, such an information disparity creates a necessity to communicate. Note that most previous works [1, 13, 14] do not create such an information disparity.
- Feedback mechanism. We promote bilateral communication by allowing the receiver to provide feedback to the sender. This simulates multi-round conversation between two people. With the feedback mechanism, both sender and receiver have equal rights to send informative messages to each other and guide the direction of the communication. Note that most previous works predominantly concentrate on a single-round, sender-receiver architecture. Therefore, we extend emergent communication into a multi-round setting, seeking insights into the impact of the feedback mechanism on the emergent communication process.
- Triangle evaluation. We ensure high-quality communication by designing three criteria. Given the criterion of the ability to handle downstream tasks, the communication content between the sender and the receiver has to be informative. Given the criterion of communication complexity, the communication content has to be compact. Given the criterion of human interpretability, the communication content has to be intuitive. These three factors together promote high-quality communication.
---
**General Response 2: Insights brought by the proposed ISQA task:**
This work provides three insights about how interaction emerges in communication.
- First, information disparity is the prerequisite for interaction. Fig. 7 shows the feedback messages for the same image according to two different questions. We see that our feedback message transfers question-related information to the sender, enhancing communication efficiency via a gradient-informed region of interest (which can be displayed as a sketch, as shown in Figure 2); this enables human-like multi-round interaction for the first time in visual emergent communication. This provides an insight that one reason why interaction emerges might be the querying of task-aligned information when the receiver is more acquainted with the object than the sender.
- Second, the constraint of communication complexity promotes interaction. Fig. 3 shows the ISQA performance comparison between 2-round and 1-round communication. We see that the downstream performance can be boosted when interaction is introduced to the visual communication pipeline. This provides an insight that complexity constraints push the agents to pursue more efficient communication, which leads to interaction.
- Third, sufficient input information is required for the feedback to be functional. Fig. 3 shows that functional feedback from the receiver relies on adequate communication complexity. This offers the crucial realization that precise feedback hinges on a foundational shared understanding.
---
**General Response 3: Comparison with [14]:**
- Methodologically, the way to design the feedback mechanism is significantly different. The feedback in [14] is a binary signal (yes or no), indicating the receiver's confidence level in rendering a decision. In comparison, the feedback in our work is a region of interests, pointing out which parts of the sketch should get higher attention in the next round.
- Quantitatively, the communication quality of [14] and our method cannot be directly compared, because the aims and task settings are different and incomparable. [14] focuses on the generation and evolution of pictography, and the task it leverages is the traditional Lewis' game of creating symbols for each class. It is not designed for and cannot execute our visual question answering task. In turn, our work cannot execute the Lewis' game either. So far, there has been no precedent for comparing results across different tasks in emergent communication.
---
**General Response 4: Human Experiments to verify CLIP measurement:**
To verify the correlation between CLIP and manual measurement, we conduct a human evaluation, where 12 participants were tasked with assessing the human-interpretability of three different models. Participants were presented with three corresponding images from each model simultaneously and asked to score them based on judgment criteria ranging from 1 to 5 (a lower number indicating better interpretability); see Table 1 in the rebuttal PDF. We sampled 3,000 sets of images consistent with the settings in Table 1 of the paper. Each set comprised images generated by the Pragmatic, PragGeo, and Geometric models from the same RGB images.
Table 2 in the rebuttal PDF compares the human interpretability score and the CLIP distance. We see that: 1) the results of the human experiments and the CLIP distance are consistent in showing the order of Geometric, PragGeo, Pragmatic in terms of human interpretability; 2) both the PragGeo and Geometric models provide an average human interpretability score between "partially understand" and "mainly understand", while the sketches provided by the Pragmatic model cannot be understood; and 3) considered together with Fig. 3, PragGeo achieves a more optimal balance among task performance, drawing complexity and human interpretability.
Pdf: /pdf/4e6b37d06cfec615433672260287833a6d64998f.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Parallel-mentoring for Offline Model-based Optimization | Accept (poster) | Summary: This paper introduces a novel offline model-based optimization (MBO) framework that incorporates a fine-tuning strategy for ensemble proxy models. Instead of directly using three independent proxy models, the paper suggests a "voting" strategy. In this scheme, based on the majority order of two (near) inputs, each model is fine-tuned using binary cross-entropy (BCE) loss before it is utilized for the optimization problem. The paper contends that this approach enables the proxy models to mentor each other, thereby leading to improved performance during the input optimization stage.
Strengths: - The paper is well-written and easy to follow.
- The experimental results are evaluated extensively, as well as showing quite a good performance compared with prior offline MBO methods.
- To the best of my knowledge, exploring fine-tuning strategies to improve the performance of ensemble models is relatively under-explored in the offline MBO literature. In this respect, I think this paper investigates an important but overlooked problem.
Weaknesses: - My primary concern revolves around hyperparameter selection. It appears the method introduces a considerable number of additional hyperparameters, including, but not limited to, learning rates for fine-tuning soft-labels and proxy models, the variance $\delta^2$ for selecting an adjacent input from the current solution, and the number of near inputs, $K$. Although the authors demonstrated the stability and effect of these hyperparameters (e.g., $K$), an analysis of some other hyperparameters, such as $\delta^2$ or fine-tuning learning rates, is still missing. Given that the paper addresses "offline" MBO, the experiments should detail how these hyperparameters can be determined (e.g., $\delta^2$ might be defined as a function of input dimension or a constant). Alternatively, the authors should demonstrate that the proposed method is robust to variations in these hyperparameters.
- The proposed fine-tuning strategy lacks the original regression objective; the model might produce inaccurate values at training points after fine-tuning.
- Although it's not a major weakness, any theoretical guarantee or supporting intuition would significantly enhance the manuscript.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - The paper deals with discrete tasks as well, while the optimization strategy is based on gradient ascent. How the input optimization is performed?
- I wonder if the authors tried to incorporate the method on prior offline MBO method, e.g., COMs to pre-trained the model first and then fine-tune models via the proposed fine-tuning strategy.
- I wonder if the model is fine-tuned sequentially or not. Specifically, once $f_A$ is fine-tuned, then is fine-tuned $f_A$ used for fine-tuning $f_B$ or the original $f_A$ before fine-tuning used?
- Why the fine-tuning step is set as 1? Did the authors try more steps?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discussed the limitations and potential negative impact in their manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weaknesses
> My primary concern revolves around hyperparameter selection.
Please refer to the global response "On Additional Hyperparameters".
> The proposed fine-tuning strategy lacks the original regression objective; the model might produce inaccurate values at training points after fine-tuning.
Thank you for pointing out the potential pitfalls of our fine-tuning strategy concerning the original regression objective. Emphasizing soft-labels can indeed shift predictions at the original training points. However, our primary goal with adaptive soft-labeling is broader: it aims to enhance the model's generalization, especially towards out-of-distribution points around the current optimization point. This might sometimes sacrifice pinpoint accuracy on training points but significantly improves the model's ranking ability and adaptability beyond the training set.
To emphasize, **our model still respects the regression objective**. The soft-labels $\boldsymbol{\hat{y}}^{S}$, integral to the fine-tuning, are tied to the static dataset and refined to minimize the mean squared error, as depicted in Eq.(9) of our paper:
$$\boldsymbol{\hat{y}}^{S'} = \boldsymbol{\hat{y}}^{S} - \frac{\lambda}{N} \frac{\partial \sum_{i=1}^{N}\big(f^A_{\boldsymbol{\theta}(\boldsymbol{\hat{y}}^{S})}(\boldsymbol{x}_i) - y_i\big)^2}{\partial (\boldsymbol{\hat{y}}^{S})^\top}.$$
In short, our model finds a balance between adaptability from soft-labeling and the foundational accuracy of the original regression, resulting in a versatile and robust ensemble for real-world scenarios.
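To make the bi-level flavor of this update concrete, here is a heavily simplified sketch: the fine-tuned proxy of Eq. (9) is replaced by a kernel smoother over the soft-labels, so the MSE gradient with respect to the soft-labels is analytic. All names and the smoother itself are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def soft_label_step(y_soft, X, y_true, lam=0.1, bandwidth=1.0):
    """One gradient step on the soft-labels, minimizing the MSE that a
    soft-label-dependent 'proxy' incurs on the static dataset.

    Simplification: the fine-tuned proxy f(x_i) is a Nadaraya-Watson
    smoother over the soft-labels, so d f / d y_soft is analytic.
    """
    # Kernel weights W[i, j]: similarity of training point i to anchor j.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    W = np.exp(-d**2 / (2 * bandwidth**2))
    W = W / W.sum(axis=1, keepdims=True)

    preds = W @ y_soft                                   # proxy predictions on the static set
    grad = (2.0 / len(y_true)) * W.T @ (preds - y_true)  # d MSE / d y_soft
    return y_soft - lam * grad

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))       # static dataset designs (toy)
y_true = rng.normal(size=8)       # static dataset scores (toy)
y_soft = np.zeros(8)              # initial soft-labels

for _ in range(200):
    y_soft = soft_label_step(y_soft, X, y_true)
# The refined soft-labels drive the surrogate proxy's MSE on the static data down.
```

In the actual method the inner mapping from soft-labels to proxy weights is a fine-tuning step, so the outer gradient is taken through that step (standard bi-level optimization) rather than through a fixed smoother as here.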
> Although it's not a major weakness, any theoretical guarantee or supporting intuition would significantly enhance the manuscript.
We have illustrated the motivation of our paper in Figure 1. The idea underpinning tri-mentoring is to harness the voted ranking signal to fine-tune the proxies. As depicted in Figure 1, we maintain three parallel proxies, namely fA, fB, and fC. Suppose we have two designs x1 and x2 in the neighborhood of the current optimization point. If proxies fA and fB concur that the score of x1 is larger than that of x2, while fC disagrees, we follow the majority voting principle. In this scenario, the agreement between fA and fB provides a more trustworthy ranking. Their voted ranking signal fV(x1) > fV(x2) is then used to guide or 'mentor' fC, thereby enhancing its predictive performance.
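The voting principle described above can be sketched in a few lines; the proxy functions below are toy stand-ins, not the paper's trained networks:

```python
import numpy as np

def consensus_pairwise_label(proxies, x1, x2):
    """Majority vote among proxies on whether score(x1) > score(x2).

    `proxies` is a list of callables mapping a design to a scalar score.
    Returns 1 if the majority predicts f(x1) > f(x2), else 0.
    """
    votes = [1 if f(x1) > f(x2) else 0 for f in proxies]
    return 1 if sum(votes) > len(proxies) / 2 else 0

# Toy proxies standing in for fA, fB, fC (hypothetical):
fA = lambda x: float(np.sum(x))        # ranks x1 above x2
fB = lambda x: float(np.sum(x) + 0.1)  # agrees with fA
fC = lambda x: float(-np.sum(x))       # disagrees -> gets outvoted

x1, x2 = np.array([2.0, 1.0]), np.array([1.0, 0.5])
label = consensus_pairwise_label([fA, fB, fC], x1, x2)
# label == 1: the voted signal fV(x1) > fV(x2), which would then be used
# (via a pairwise ranking loss) to mentor the dissenting proxy fC.
```

In the full method this consensus label becomes the target of a binary cross-entropy ranking loss used to fine-tune each proxy.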
## Questions
> The paper deals with discrete tasks as well, while the optimization strategy is based on gradient ascent. How the input optimization is performed?
You are correct to point out the discrete nature of the tasks we've addressed. For input optimization, we adopt an encoding-decoding strategy as per the design-bench [1]. This allows us to transform the discrete space into a continuous one that we can perform gradient ascent on. After obtaining optimized continuous values, they are decoded back to the original discrete space. This approach lets us effectively leverage the gradient ascent for optimization.
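A minimal sketch of this encode/optimize/decode loop follows, with a toy proxy gradient; design-bench's actual encoding is learned and differs from the raw logits used here:

```python
import numpy as np

def optimize_discrete(proxy_grad, logits, steps=50, lr=0.5):
    """Gradient ascent in a continuous (logit) space for a discrete design,
    in the spirit of the encode/optimize/decode strategy described above.

    proxy_grad: callable returning d proxy / d logits at the current logits.
    logits: (seq_len, vocab) array encoding the discrete sequence.
    """
    for _ in range(steps):
        logits = logits + lr * proxy_grad(logits)
    return logits.argmax(axis=-1)  # decode back to discrete tokens

# Toy example (hypothetical proxy): reward putting probability mass on
# token 2 at every position; its gradient is target - softmax(logits).
target = np.zeros((4, 3))
target[:, 2] = 1.0
proxy_grad = lambda z: target - np.exp(z) / np.exp(z).sum(-1, keepdims=True)

tokens = optimize_discrete(proxy_grad, np.zeros((4, 3)))
# tokens -> array([2, 2, 2, 2])
```

The continuous relaxation lets the same gradient-ascent machinery used for continuous tasks drive the search, after which `argmax` recovers a valid discrete design.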
> I wonder if the authors tried to incorporate the method on prior offline MBO method, e.g., COMs to pre-trained the model first and then fine-tune models via the proposed fine-tuning strategy.
Thank you for the insightful inquiry on integrating conservatism (COMs) and Gaussian priors (ROMA) with our tri-mentoring.
In brief, our additional experimental results on Ant and TFB8 tasks are:
| Method | Ant | TFB8 |
|--------|------|-----|
| COMs | 0.856 | 0.496 |
| Tri-mentoring | 0.948 | 0.970 |
| COMs + Tri-mentoring | 0.941 | 0.960 |
| ROMA | 0.914 | 0.917 |
| Tri-mentoring | 0.948 | 0.970 |
| ROMA + Tri-mentoring | 0.945 | 0.971 |
While COMs combined with tri-mentoring leads to a slight drop in performance, the ROMA and tri-mentoring combination retains similar outcomes. The dip with COMs might arise from its lower scoring for out-of-distribution designs, potentially conflicting with the tri-mentoring's majority voting mechanism for odd designs.
On the other hand, ROMA, promotes smooth proxy models. Theoretically, this characteristic should be orthogonal to our tri-mentoring. The absence of improvement might suggest that our current benchmark is not sufficiently challenging to truly exploit the combined strengths of both ROMA and tri-mentoring. We'll ensure to incorporate these analyses into Appendix.
> I wonder if the model is fine-tuned sequentially or not. Specifically, once fA is fine-tuned, then is fine-tuned fA used for fine-tuning fB or the original fA before fine-tuning used?
As for the fine-tuning process, it is not performed sequentially. Instead, we generate consensus labels using the three proxies, following which each proxy is independently fine-tuned using these consensus labels and the adaptive soft-labeling method.
> Why the fine-tuning step is set as 1? Did the authors try more steps?
Thank you for raising the question regarding the number of fine-tuning steps, specifically why we have set it to 1.
We select a single step primarily for efficiency. Moreover, our comprehensive experiments indicate that increasing the number of fine-tuning steps does not significantly impact the model's performance, confirming the method's robustness in this regard. Below are our normalized results on TFB8 and Ant for 1, 2, 3, and 4 fine-tuning steps, respectively. These results are normalized by the performance with just one step:
- TFB8: [1.000, 0.986, 1.006, 0.992]
- Ant: [1.000, 0.994, 1.000, 0.985]
As seen, the performance remains consistent across different numbers of steps.
## Overall
Have we addressed your concerns adequately? Your feedback is invaluable, and we await further discussion. Thank you.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for the detailed response. The provided response addressed most of my concerns well. Hence, I raise my score to 7. Please add the related discussion in the final manuscript if the paper is accepted.
---
Reply to Comment 1.1.1:
Title: Thanks for your prompt feedback and continued support.
Comment: Thank you for your prompt feedback and the adjusted score. We are glad the response addressed most of your concerns. Rest assured, the related discussion will be diligently added to the final manuscript, as suggested, if the paper is accepted. | Summary: The aim of the paper is to tackle the out-of-distribution problem in offline model-based optimization methods. To this end, the authors drew inspiration from recent studies that proposed better ensemble proxies and weak ranking supervision signals. They proposed a new approach called parallel-mentoring, which consists of two main modules. The first module, a voting-based pairwise supervision module, generates consensus labels. The second module, an adaptive soft-labeling module, aims to learn more accurate soft labels. The authors conducted experiments on both continuous and discrete tasks, and their approach achieved the highest ranking in the results.
Strengths: 1. The paper proposed a new approach, which settles the out-of-distribution problem in offline model-based optimization.
2. The paper is clear and easy to understand.
3. The paper conducts extensive experiments on the design-bench.
Weaknesses: 1. The research question is poorly defined, and the statement of the importance of the research is lacking. The out-of-distribution problem is not clearly defined, and its significance is not convincingly conveyed.
2. There is no experimental evidence to support the assertion that the tri-mentoring approach can effectively address the out-of-distribution problem.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. How does leveraging the knowledge of the static dataset help to learn more accurate soft-labels?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## General Reply
We greatly appreciate your detailed feedback and insights. We're committed to addressing each point raised thoroughly. Thank you.
## Weaknesses
> The research question is poorly defined, and the statement of the importance of the research is lacking. The out-of-distribution problem is not clearly defined, and its significance is not convincingly conveyed.
Our sincere thanks for your insightful comments. The primary research question of our work revolves around offline model-based optimization (MBO), a topic we detail in our "Preliminaries: Gradient Ascent on Offline Model-based Optimization" section. It aims to maximize a black-box objective function using a static dataset of designs and scores.
We acknowledge your point regarding the necessity to highlight the importance of our research. As a remedy, we plan to emphasize this by adding the following text after the sentence "Designing new objects or entities to optimize specific properties is a widespread challenge, encompassing various domains such as materials, robots, and DNA sequences." in Line 25: "A more effective solution to this challenge holds enormous potential benefits for society, such as facilitating the proposal of new superconductors to reduce energy costs, enabling the design of faster robots for service, and advancing new drug discoveries to combat diseases."
Regarding the out-of-distribution problem, it was our aim to illustrate its significance from Line 101 to Line 103: "However, this method faces a challenge with the proxy being vulnerable to out-of-distribution designs. When handling designs that substantially differ from the training distribution, the proxy yields inaccurate predictions." We will ensure to make this point clearer in our revision to make the concept and its importance more accessible.
> There is no experimental evidence to support the assertion that the tri-mentoring approach can effectively address the out-of-distribution problem.
Thank you for raising concerns regarding the experimental validation of our tri-mentoring approach's efficacy against out-of-distribution issues.
To directly tackle this concern, we have performed additional experiments on TFBind8 and TFBind10. We chose these two tasks because high-scoring designs can be easily identified, allowing us to create an out-of-distribution test set. From each dataset, we sample 1000 high-scoring designs to form our OOD test sets. On these, our tri-mentoring ensemble has been tested over 5 runs against a mean ensemble and a single proxy, with the mean squared error (MSE) as the metric. Here are the results:
- **TFBind8**:
- Tri-mentoring ensemble: MSE of $0.0537 \pm 0.0008$
- Mean ensemble: MSE of $0.0549 \pm 0.0006$
- Single proxy: MSE of $0.0660 \pm 0.0012$
- **TFBind10**:
- Tri-mentoring ensemble: MSE of $0.2271 \pm 0.0075$
- Mean ensemble: MSE of $0.2669 \pm 0.0084$
- Single proxy: MSE of $0.2776 \pm 0.0185$
These results illustrate that the tri-mentoring ensemble consistently outperforms the other models in handling OOD designs. We will incorporate these experiments and analyses into the Appendix.
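As a minimal, hypothetical sketch of the evaluation protocol above (synthetic scores stand in for the real proxies and designs; only the variance-reduction effect of averaging a mean ensemble is illustrated, not tri-mentoring itself):

```python
import numpy as np

def mse(pred, target):
    """Mean squared error between predicted and ground-truth scores."""
    return float(np.mean((pred - target) ** 2))

rng = np.random.default_rng(0)

# Stand-in ground-truth scores for 1000 held-out high-scoring (OOD) designs.
y_ood = rng.uniform(0.8, 1.0, size=1000)

# Three proxies with independent prediction errors stand in for trained models.
proxy_preds = [y_ood + rng.normal(0.0, 0.1, size=1000) for _ in range(3)]

single_mse = mse(proxy_preds[0], y_ood)                  # single proxy
ensemble_mse = mse(np.mean(proxy_preds, axis=0), y_ood)  # mean ensemble

# Averaging independent errors reduces their variance, so the ensemble MSE
# is lower than the single-proxy MSE.
assert ensemble_mse < single_mse
```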
These findings, coupled with our main manuscript and appendices, offer a comprehensive demonstration of our method's efficacy. Specifically, in Line 234, we highlight, "(1) Table 1 demonstrates *tri-mentoring* attains the best results across the board, highlighting its effectiveness. Its consistent gains over Grad confirm its ability to tackle the out-of-distribution issue." Furthermore, in Appendix A.3 "Accuracy of Pairwise Consensus Labels," we illustrate that a majority of the consensus labels are accurate, underscoring our approach's competency in addressing the out-of-distribution problem.
In light of this evidence, we firmly believe in the capability of our tri-mentoring approach to effectively address the out-of-distribution problem.
## Questions
> How does leveraging the knowledge of the static dataset help to learn more accurate soft-labels?
We greatly appreciate your question. The primary benefit of utilizing the static dataset comes from the commonality that it shares with the soft labels. Despite having different data distributions, the soft-labels and the static dataset share underlying similarities, as they both represent the same ground-truth from pairwise and pointwise perspectives, respectively. This inherent connection allows for a rich exchange of information between the two, significantly benefiting the learning of more accurate soft-labels.
More specifically, a proxy fine-tuned with soft-labels is expected to perform well on the static dataset due to this commonality. We thus leverage the regression loss of the static dataset to guide the optimization of soft-labels using bi-level optimization. This strategy is based on our novel recognition of the shared underlying truth between the static dataset and the soft-labels, making our method uniquely positioned to capitalize on it. The specific discussion about this methodology can be found in our manuscript from Line 158 to Line 168.
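The idea above can be made concrete with a heavily simplified, self-contained sketch (a toy linear proxy, a single candidate pair, and a grid search standing in for the gradient-based bi-level update; all names and settings here are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear proxy f(x) = w @ x and a static dataset generated by w_true.
w_true = np.array([1.0, -2.0])
X_static = rng.normal(size=(50, 2))
y_static = X_static @ w_true

# A candidate pair along the optimization path; the soft label p is the
# (learnable) probability that x1 scores higher than x2.
x1, x2 = rng.normal(size=2), rng.normal(size=2)

def inner_update(w, p, lr=0.1):
    """One proxy fine-tuning step on a pairwise logistic loss with soft label p."""
    prob = 1.0 / (1.0 + np.exp(-w @ (x1 - x2)))
    grad = (prob - p) * (x1 - x2)  # gradient of the cross-entropy w.r.t. w
    return w - lr * grad

def outer_loss(p, w):
    """Static-dataset regression loss of the proxy after fine-tuning with p."""
    w_ft = inner_update(w, p)
    return float(np.mean((X_static @ w_ft - y_static) ** 2))

# Outer step: choose the soft label whose induced fine-tuning fits the static
# dataset best (a grid search here; the paper differentiates through the
# inner update instead).
w0 = w_true + rng.normal(0.0, 0.5, size=2)  # imperfect initial proxy
p_grid = np.linspace(0.0, 1.0, 21)
p_best = p_grid[np.argmin([outer_loss(p, w0) for p in p_grid])]
assert outer_loss(p_best, w0) <= outer_loss(0.5, w0)
```

The grid search is only a stand-in: it shows why the static-dataset loss can supervise the soft label, since both reflect the same underlying ground truth.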
## Overall
Have we addressed your queries satisfactorily? Your feedback is invaluable, and we're keen on further discussions. Thank you.
---
Rebuttal Comment 1.1:
Title: Looking forward to your feedback
Comment: We sincerely appreciate the time and effort you have invested in reviewing our work. We have made diligent efforts to address all the concerns you listed, including:
- Clarified the research question and the out-of-distribution problem and emphasized the significance of both.
- Provided experimental evidence to demonstrate the efficacy of our tri-mentoring approach in handling out-of-distribution issues
- Explained how leveraging the knowledge of the static dataset aids in the learning of more accurate soft-labels
Could you please provide further clarification if there are any aspects that remain unclear? In the past week, we have received constructive responses from the other three reviewers. We value your perspective and are eagerly looking forward to your feedback. We stand ready to make any necessary revisions to enhance our paper. | Summary: This paper introduces an innovative study that revolutionizes offline model-based optimization (offline MBO) by proposing a novel ensemble method. The proposed approach addresses a critical challenge in offline MBO, namely the handling of potentially inaccurate pseudo-labels generated by proxy models, particularly in out-of-distribution scenarios. It achieves this by harnessing the power of multiple parallel proxies, utilizing voting-based supervision, and incorporating adaptive pseudo-labeling.
One of the notable strengths of this method is its intuitive ensemble framework, which effectively mitigates the inherent inaccuracies of proxy models. Despite the technical complexity and meticulous design tailored specifically for the offline MBO task, this approach demonstrates significant performance improvements across various benchmark datasets.
I recommend accepting this paper with some minor revisions.
Strengths:
1. This paper presents a pioneering method that effectively mitigates the issue of proxy model inaccuracy in out-of-distribution scenarios, offering a novel solution to this longstanding challenge.
2. The bi-level formulation utilized in this study demonstrates technical robustness and holds great potential for widespread application across various domains, highlighting its versatility and reliability.
3. The approach of employing a three-proxy ensemble may appear simple at first glance, but its effectiveness in improving results is remarkable, showcasing the power of this streamlined yet impactful technique.
4. The performance of the proposed method shows great promise, achieving impressive results not only at the 100th percentile but also at the 50th percentile, indicating consistent and reliable advancements across the entire distribution.
5. The ablation study presented in this paper is exemplary in its clarity and thoroughness, providing a comprehensive analysis of the individual components and their contributions to the overall method, enhancing the scientific rigor and understanding of the approach.
Weaknesses: 1. The analysis lacks consideration for ensemble techniques, which could have enhanced the effectiveness of the offline MBO. Ensembles, being statistical methods, offer the opportunity for deeper analysis and intuition, especially from a statistical perspective. Incorporating such analysis and intuition into the manuscript would greatly improve its quality.
2. The algorithm appears to be overly complex, introducing numerous degrees of freedom. This complexity could limit the practicality of the offline MBO approach since there is no opportunity to fine-tune hyperparameters using an oracle function. Simplifying the algorithm and reducing the number of degrees of freedom would make it more practical and applicable in real-world scenarios.
3. It would be beneficial to provide a more comprehensive discussion on ensemble techniques in similar domains, such as offline biosequential design [1,2]. Exploring the applications of ensemble methods in these domains would strengthen the manuscript's relevance and provide valuable insights for researchers in related fields.
[1] Kim, Minsu, et al. "Bootstrapped Training of Score-Conditioned Generator for Offline Design of Biological Sequences." arXiv preprint arXiv:2306.03111 (2023).
[2] Jain, Moksh, et al. "Biological sequence design with gflownets." International Conference on Machine Learning. PMLR, 2022.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. Can this method effectively solve high-dimensional biological sequence optimization problems, such as optimizing GFP and UTR sequences?
2. Could you please provide an approximate estimate of the training time required for this method? This information would be valuable for future researchers interested in implementing the approach.
3. Does the performance of the method depend on the number of proxies used?
4. What is the rationale behind using Tri-mentoring? How would leveraging bi-mentoring between the individuals involved affect the outcomes?
5. What are the expected outcomes when incorporating conservatism (e.g., COMs) or a Gaussian prior (e.g., ROMA) into the proxy model and integrating them with the tri-mentoring idea? Can these methods be orthogonal to each other?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Authors already put the limitation part in the paper which is really valuable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## General Reply
We value your constructive feedback and will incorporate it diligently in our revisions. Thank you.
## Weakness
> The analysis lacks consideration for ensemble techniques
We thank you for highlighting the importance of ensemble. In our manuscript, we've employed and discussed two ensemble methods: Deep Ensemble (DE, mean ensemble) and Gradient Boosting (GB, sequentially trains proxies). As noted in Line 236-238, tri-mentoring outperforms DE and GB in all four tasks, demonstrating its robustness by utilizing shared ranking supervision signals.
Further, Appendix A.3 focuses on the accuracy of consensus labels generated by the three-proxy ensemble within our tri-mentoring strategy. A majority of these labels are accurate, offering clear intuition behind the ensemble's efficacy.
> The algorithm appears to be overly complex.
Refer to the global response "On Additional Hyperparameters".
> It would be beneficial to provide a more comprehensive discussion on ensemble techniques in offline biosequential design [1,2].
We value your suggestion to delve deeper into ensembles for offline biosequential design. While our experiments already include two DNA-design tasks from design-bench (TFB8 and TFB10), we acknowledge the significance of [1,2].
Your referenced works [1,2] also use a trained proxy, which we believe can benefit from our parallel-mentoring. To underscore this, we will enhance our discussion by adding in Line 308: "Our proposed ensemble training process, with its focus on parallel-mentoring, has the potential to improve the proxy/reward training in [1, 2], thereby contributing to advancements in biological sequence design."
## Questions
> Can this method effectively solve high-dimensional biological sequence optimization
Indeed, our approach has been tested and proven effective on these tasks, achieving normalized scores of 0.865 (GFP) and 0.699 (UTR), which are quite competitive compared to performances reported in [3,4].
In the original manuscript, we chose not to include these results, as suggested by [5], which states that many methods demonstrate indistinguishable performances on GFP and UTR. However, to highlight our method's versatility, we will add in Line 252: "Our proposed method also demonstrates its effectiveness on high-dimensional biological sequence design, achieving maximum normalized scores of 0.865 and 0.699 on GFP and UTR respectively."
> Could you please provide an approximate estimate of the training time
Thank you for emphasizing the significance of training time.
To provide an approximate estimate:
- For single-proxy training on the static dataset, the time (seconds) is:
- TFB8: 101.1 s
- Ant: 33.5 s
This training time is consistent across all methods that utilize a proxy.
As for tri-mentoring, the added overhead primarily stems from proxy fine-tuning. Here's the breakdown:
- TFB8: fine-tuning step is 0.21 s.
- Ant: fine-tuning step is 0.08 s.
These efficient timings, coupled with our method's benefits, make our approach a practical choice for future researchers. We will incorporate these analyses into the Appendix.
> Does the performance of the method depend on the number of proxies used?
Refer to the global response "Regarding Number of Parallel Proxies".
> What is the rationale behind using Tri-mentoring?
The underpinning idea behind tri-mentoring is to harness the voted ranking signal to fine-tune the proxy. This is depicted in Figure 1 of our paper. Here, we maintain three parallel proxies, namely fA, fB, and fC. Suppose we have two designs x1 and x2 near the current point. If proxies fA and fB concur that the score of x1 is larger than that of x2, while fC disagrees, we follow the majority voting principle. The agreement between fA and fB provides a more trustworthy ranking. Their voted ranking signal fV(x1) > fV(x2) is then used to mentor fC, thereby enhancing its performance.
As for bi-mentoring, it's not applicable since the majority voting we leverage necessitates at least three proxies to cast a vote, making it unfit for a bi-mentoring approach.
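The voting step described above can be sketched in a few lines (a toy illustration; the function and variable names are ours, not the paper's):

```python
def majority_rank_label(scores_x1, scores_x2):
    """Majority-vote pairwise label: 1 if most proxies rank x1 above x2, else 0.

    Each argument holds one predicted score per proxy; an odd number of
    proxies (at least three) guarantees the vote cannot tie, which is why
    bi-mentoring with only two proxies is not applicable.
    """
    votes = sum(int(s1 > s2) for s1, s2 in zip(scores_x1, scores_x2))
    return int(votes > len(scores_x1) / 2)

# fA and fB rank x1 above x2 while fC disagrees; the voted label says
# x1 > x2, and this consensus signal is then used to mentor the dissenting fC.
label = majority_rank_label(scores_x1=[0.9, 0.8, 0.4], scores_x2=[0.5, 0.6, 0.7])
assert label == 1
```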
> What are the expected outcomes when incorporating conservatism (e.g., COMs) or a Gaussian prior (e.g., ROMA)
Thank you for the insightful inquiry on integrating conservatism (COMs) and Gaussian priors (ROMA) with our tri-mentoring.
In brief, our additional experimental results on Ant and TFB8 tasks are:
| Method | Ant | TFB8 |
|--------|------|-----|
| COMs | 0.856 | 0.496 |
| Tri-mentoring | 0.948 | 0.970 |
| COMs + Tri-mentoring | 0.941 | 0.960 |
| ROMA | 0.914 | 0.917 |
| Tri-mentoring | 0.948 | 0.970 |
| ROMA + Tri-mentoring | 0.945 | 0.971 |
While COMs combined with tri-mentoring leads to a slight drop in performance, the ROMA and tri-mentoring combination retains similar outcomes. The dip with COMs might arise from its lower scoring of out-of-distribution designs, which potentially conflicts with tri-mentoring's majority voting mechanism for OOD designs.
On the other hand, ROMA promotes smooth proxy models. Theoretically, this characteristic should be orthogonal to our tri-mentoring. The absence of improvement might suggest that our current benchmark is not sufficiently challenging to exploit the combined strengths of ROMA and tri-mentoring. We will incorporate these analyses into the Appendix.
## Overall
Does our response address your concerns? We look forward to continued discussion. Thank you.
[1] Kim, Minsu, et al. Bootstrapped Training of Score-Conditioned Generator for Offline Design of Biological Sequences.
[2] Jain, Moksh, et al. Biological sequence design with gflownets.
[3] Trabucco B, Kumar A, Geng X, et al. Conservative objective models for effective offline model-based optimization.
[4] Chen C, Zhang Y, Fu J, et al. Bidirectional learning for offline infinite-width model-based optimization.
[5] Trabucco B, Geng X, Kumar A, et al. Design-bench: Benchmarks for data-driven offline model-based optimization.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. I keep my score supporting this paper to be accepted.
---
Reply to Comment 1.1.1:
Title: Thanks for your continued support.
Comment: Thank you for your continued support and for maintaining your score in favor of accepting our paper. We truly appreciate your time and the insights you provided throughout the review process. | Summary: The paper studies offline model-based optimization, which maximizes a black-box objective with a dataset of designs and scores. Most approaches rely on training a dataset proxy to approximate the black-box objective and perform SGD on the objective. This paper proposes using three proxies and voting-based supervision to generate labels. In addition, they use adaptive soft-labeling to make the proposed method more robust to label noises.
Strengths: + A clear description of the problem background and the proposed algorithm procedure;
+ Extensive ablation studies on various components, such as the voting based pairwise supervision;
Weaknesses: - It's weak and less general that the proposed method constrains itself into the three parallel mentoring cases. I'd suggest the writing changed to a much more general form of any number of the parallel proxies;
- The adaptive soft labeling seems to directly apply the meta-learning method /bi-level optimization method on the pseudo-label/soft label. The novelty seems not very justifiable.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please address the two weaknesses above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## General Reply
We sincerely appreciate the effort and time you've invested in providing us with your insightful and constructive feedback. Your comments are invaluable to us as they offer opportunities for refining, rectifying, and enhancing the content of this paper. We are committed to meticulously revising our paper based on the suggestions you've delineated below.
## Weaknesses
> It's weak and less general that the proposed method constrains itself into the three parallel mentoring cases. I'd suggest the writing changed to a much more general form of any number of the parallel proxies;
We appreciate your feedback concerning the perceived limitation of our method to three parallel mentoring instances. However, it's important to clarify that we **have indeed considered a more general framework in our paper**, extending beyond the three-proxy scenario.
As stated in the paper's appendix under the subsection titled "Scenarios for Parallel Mentoring with Multiple Proxies", we have extended our method to include multiple proxies. The performance is quite robust against the number of parallel proxies.
Furthermore, we **explicitly reference this aspect in three instances within the main body of our paper**:
1. Lines 57-58: "This paper primarily focuses on the three-proxy case, referred to as tri-mentoring, but we also examine the situation with more proxies in Appendix A.1."
2. Lines 106-107: "The method can be easily extended to incorporate more proxies, as discussed in Appendix A.1."
3. Lines 174-175: "While our primary focus is the tri-mentoring with 3 proxies, we additionally explore other parallel-mentoring implementations utilizing 5, 7, 9, and 11 proxies in Appendix A.1."
Therefore, we believe that we have thoroughly addressed your concerns about the general applicability of our method, as described above. We also encourage you to refer to the global response "Regarding Number of Parallel Proxies".
> The adaptive soft labeling seems to directly apply the meta-learning method /bi-level optimization method on the pseudo-label/soft label. The novelty seems not very justifiable.
We appreciate your thoughtful comment regarding the perceived similarity between our proposed adaptive soft labeling approach and existing bi-level optimization methods applied to soft labels [1]. However, our approach carries substantial novelty due to its fundamental differences from previous works [1] that also applied bi-level optimization to optimize soft labels. These differences lie in three key aspects: initialization, optimization, and objective.
1. **Initialization**: Contrary to previous works that aim to address label noise in a collected dataset, our method handles label noise created by a unique voting mechanism along the gradient optimization path.
2. **Optimization**: Previous work typically leverages the classification loss of a clean validation set to update the soft label. However, our method proposes an innovative strategy of employing the regression loss of the static dataset for soft label updates. This unique strategy is due to our novel recognition that, despite their differing data distributions, the soft-labels and the static dataset share significant underlying similarities. They both represent the same ground-truth from pairwise and pointwise perspectives, respectively.
3. **Objective**: Whereas previous works primarily focus on mitigating label noise to enhance the performance of a classification model, our work takes a distinctive route by addressing label noise to improve the performance of a regression model for offline MBO. This adaptation to a new context further underscores the innovative aspects of our approach.
We will incorporate these discussions into the Appendix.
## Overall
Does the provided response address your concerns? We greatly value your thorough feedback and anticipate further discussion during the rebuttal phase. Thank you.
[1] Wu Y, Shu J, Xie Q, et al. Learning to purify noisy labels via meta soft label corrector. AAAI 2021.
---
Rebuttal Comment 1.1:
Comment: Thanks for pointing out that the more general formulation has been discussed in the Appendix, which has addressed my major concerns.
---
Reply to Comment 1.1.1:
Title: Thanks for your support.
Comment: Thank you for your adjusted rating of 5 and for recommending acceptance. We are pleased to hear that our response in the Appendix has addressed most of your major concerns. We appreciate the time and effort you've put into reviewing our work. | Rebuttal 1:
Rebuttal: Dear Reviewers,
Thank you for your thorough examination of our paper and for sharing insightful feedback. We recognize your concerns and, in this rebuttal, address two main common points raised.
## On Additional Hyperparameters
> review from usfZ: The algorithm appears to be overly complex, introducing numerous degrees of freedom. This complexity could limit the practicality of the offline MBO approach since there is no opportunity to fine-tune hyperparameters using an oracle function. Simplifying the algorithm and reducing the number of degrees of freedom would make it more practical and applicable in real-world scenarios.
> review from NEb9: My primary concern revolves around hyperparameter selection. It appears the method introduces a considerable number of additional hyperparameters, including, but not limited to, learning rates for fine-tuning soft-labels and proxy models, the variance for selecting an adjacent input from the current solution, and the number of near inputs. Although the authors demonstrated the stability and effect of these hyperparameters, an analysis of some other hyperparameters, such as the fine-tuning learning rates, is still missing. Given that the paper addresses "offline" MBO, the experiments should detail how these hyperparameters can be determined (e.g., defined as a function of input dimension or a constant). Alternatively, the authors should demonstrate that the proposed method is robust to variations in these hyperparameters.
We understand the concerns related to the algorithm's complexity, and our focus has been on demonstrating its robustness across different hyperparameters. Our method introduces four additional hyperparameters: 1. the number of samples ($K$), 2. the variance ($\delta$), 3. the fine-tuning learning rate ($\gamma$), and 4. the soft-labeling learning rate ($\lambda$). As detailed in Section 4.6 and Appendix A.4, our method remains **robust against the number of samples ($K$) and the variance ($\delta$)**. As for $\gamma$ and $\lambda$, while we have set default constants (1e-3 for $\gamma$ and 1e-1 for $\lambda$), we have also conducted additional experiments over varied ranges on TFB8 and Ant:
- Evaluated across:
- $\gamma$: [2.5e-4, 5e-4, 1e-3, 2e-3, 4e-3]
- $\lambda$: [0.025, 0.05, 0.1, 0.2, 0.4]
The following results are normalized by dividing them by the result obtained for the default values.
Experimental Results on TFB8
| Parameter | 2.5e-4 | 5e-4 | 1e-3 | 2e-3 | 4e-3 |
|----------|-------|------|------|------|------|
| $\gamma$ | 0.903 | 0.995 | 1.000 | 0.996 | 1.006 |
| $\lambda$ | 0.945 | 0.983 | 1.000 | 1.000 | 0.995 |
Experimental Results on Ant
| Parameter | 2.5e-4 | 5e-4 | 1e-3 | 2e-3 | 4e-3 |
|----------|-------|------|------|------|------|
| $\gamma$ | 0.953 | 1.008 | 1.000 | 1.021 | 1.022 |
| $\lambda$ | 1.011 | 0.989 | 1.000 | 0.992 | 0.997 |
These results confirm the method's **resilience against variations in $\gamma$ and $\lambda$**, emphasizing its practicality for real-world scenarios despite its perceived complexity. We will incorporate these analyses into Appendix A.4 for better clarity.
## Regarding Number of Parallel Proxies
> review from CizP: It's weak and less general that the proposed method constrains itself into the three parallel mentoring cases. I'd suggest the writing changed to a much more general form of any number of the parallel proxies.
> review from usfZ: Does the performance of the method depend on the number of proxies used?
Our paper indeed places an emphasis on the three-proxy configuration, commonly referred to as "tri-mentoring". However, this focus does not imply a limitation. We have actively pursued a broader framework, one that encompasses multiple proxies.
As stated in the paper's appendix under the subsection titled "Scenarios for Parallel Mentoring with Multiple Proxies", we **have extended our method to include multiple proxies**. Performance remains robust across different numbers of parallel proxies. As the number of proxies (M) increases, the performance ratios for both tasks generally improve, eventually reaching a plateau. This behavior suggests that an increased number of proxies enhances the robustness of the ensemble due to the increased diversity.
Furthermore, we **have explicitly referenced this aspect** in three instances within the main body of our paper:
1. Lines 57-58: "This paper primarily focuses on the three-proxy case, referred to as tri-mentoring, but we also examine the situation with more proxies in Appendix A.1."
2. Lines 106-107: "The method can be easily extended to incorporate more proxies, as discussed in Appendix A.1."
3. Lines 174-175: "While our primary focus is the tri-mentoring with 3 proxies, we additionally explore other parallel-mentoring implementations utilizing 5, 7, 9, and 11 proxies in Appendix A.1."
Best,
Submission 51 Authors. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Offline RL with Discrete Proxy Representations for Generalizability in POMDPs | Accept (poster) | Summary: This paper proposes ORDER, an offline RL solution where the agent has access to states in the offline data, but can only witness observations in which some groups of dimensions are masked when deployed, due to occlusion or perturbation in real-life scenarios. To address this problem, a 3-step solution is proposed. First, a state encoder is trained to convert the states into discrete representations; it is essentially multiple VQ-VAEs grouped by observability. Then, the RL agent is trained with IQL on the offline dataset with state dimensions randomly blocked, using representations produced by the encoder as observations. Finally, a proxy state encoder is trained to estimate the current discrete representation from the past trajectory and current state at deployment time, and its output is fed to the trained RL agent. On several MuJoCo testbeds, ORDER is shown to work better than the baselines and to produce representations better aligned with the ground truth.
Strengths: 1. This paper is clearly written and easy to follow. Figures 1, 2 and 3 effectively summarize the main idea of the algorithm, helping the reader understand. The math symbols are organized, and the motivation for each component, such as the use of discrete representations, is well-explained.
2. The solution proposed by this paper is sound, novel and makes much sense, especially the discrete representation part. From my understanding, there could be two advantages of using discrete representation over continuous: 1) the training of VQ-VAE automatically encourages different values of states to be clustered into one representation, and thus becoming more robust to estimation by the proxy representation, and 2) the RL agent works better on the discrete state space.
3. The experimental results are neat and clearly show the superiority of ORDER over the other baselines; for example, the visualization result in Fig. 6 effectively shows why the discrete representation is used.
Weaknesses: **1. The related work section could be improved.**
A) The reference list at a single point is too long (line 53, 55 and 78), while the occurrence of reference is too few in the section. The referenced related work is not (only) a proof that authors have read many papers, but (also) should summarize the prior works with a taxonomy that helps the reader to understand the main lines of work and the position of this work in the context. For example, in line 78, “both empirical and theoretical studies …” should be “both empirical […] and theoretical […] studies …”; in line 93-97, point i) and ii) should each have a reference list; in line 103-104, a list of example should be made about “specific partial observation” with reference list for each kind.
B) There should be a brief discussion on sim2real in the robotics community, since the motivation of the paper is to train the agent in a controlled laboratory environment (quoted from line 34) and deploy it in real-world disaster scenarios (quoted from line 37).
**2. The experiment section could be stronger.**
A) More clarification could be made about the second point in the strength. Specifically, is the performance better because of better representations, or better because IQL works better in the discrete state space?
B) More ablation could be added. For example, what if multiple dimensions are in one group of visibility (I assume the hyperparameters for VQ-VAE would change)? What is the effect of the dimension and the number of discrete codes of the codebook of VQ-VAE? What if the dataset is not medium, but rather a mixture of expert and random?
**3. The computational resource consumption, negative societal impact and licenses are not specified.**
**4. Other minor problems:**
A) Line 285, bseline -> baseline;
B) Line 60 in the appendix, GRU should be cited;
C) I would suggest the authors to give clearer, more intuitive explanation at the beginning of the paper about why discrete rather than continuous representation is used. Currently, the reason for using discrete representation is only stated at the end of introduction, and is only empirical without intuition.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I have one question below, apart from those in the weakness section:
From line 196, we know that one encoder only takes one state factor into account; meanwhile, from lines 14-15 in the appendix, we know that each dimension is seen as a state factor. Does this mean that the embedding is generated assuming that (groups of) dimensions are independent of each other? If so, there is a problem: the agent perceives the same environment, which should not be independent across different sources (with different visibility masks).
Below are my suggestions for the author. I will be happy to increase my score if the authors address those problems (see weakness sections for details on point 1-4):
1. Improve the related work section;
2. Present more ablation results;
3. Append the computational resource consumption, negative societal impact and licenses;
4. Fix the other minor problems;
5. I strongly advise the authors to open-source the code upon the acceptance (or next submission) of the paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper does a good job of discussing the limitations, and I am convinced that despite those limitations, the work is still valuable to the RL community; however, the paper does not mention any potential negative societal impact of the work. For example, while MuJoCo is still far from real-life applications, work on automated decision-making moving from lab environments to real life could substitute for human labor and cause job loss.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer:
Thank you for your thoughtful questions and feedback on our paper. Below, we provide detailed responses to each of your questions:
# Q1. Is the embedding generated under the assumption that (groups of) dimensions are independent of each other?
While our model considers individual state factors as described in line 196, this does not inherently imply independence across those dimensions. Here's why:
- **Embedding Concatenation:** Referencing Equation 2 and Equation 4, the embedding concatenation mechanism amalgamates information across all dimensions. This ensures integrated representation and counters the notion of isolated dimensions.
- **Trajectory Encoder Integration:** As highlighted by Equation 5 and elaborated in line 252, our trajectory encoder, denoted as \(\xi\), incorporates historical data from all dimensions when forming current proxy representations. This further solidifies the interdependence among dimensions.
We hope this provides clarity regarding our model's treatment of state dimensions. The interconnectedness of dimensions, ensured by our design choices, aptly captures the essence of agents perceiving a unified environment, regardless of mask variations.
# Improvements to the Related Work Section
**A. Organization of References:**
We value your constructive feedback on refining our Related Work section. To enhance the readability and utility of our related work section:
1. **Reference Consolidation:** We will break up long lists of references cited at a single point, ensuring that citations are spread more evenly and are contextually relevant.
2. **Enhanced Summarization:** We aim to provide a more structured summary of prior works, incorporating a taxonomy that facilitates a clearer understanding of the existing literature landscape.
**B. Discussion on Sim2Real:**
The term "sim2real" embodies the transition from models and algorithms optimized in a simulated environment to their application in real-world settings. This transition is particularly vital in robotics and automated systems where real-world conditions can be unpredictable and varied. In the context of our paper, while we predominantly focus on simulated environments, the end goal is to ensure that our findings are not just theoretically sound but also practically viable in real-world scenarios. Understanding and bridging the "sim2real" gap is a crucial step in this direction. We will incorporate a dedicated subsection that discusses prominent contributions from the robotics community on "sim2real". This will better position our work in the broader context and underline its relevance.
# Improvements to the Experiment Section
Thank you for highlighting the need for clarity on the strengths of our discrete representation approach.
The improved performance is attributed to two key factors:
- VQ-VAE's training inherently promotes clustering of diverse state values into a unified representation, enhancing the robustness of the proxy representation.
- A discrete state space, being more constrained than its continuous counterpart, benefits offline RL algorithms. Such a setup increases the likelihood of encountering similar states during training, thereby mitigating the out-of-distribution (o.o.d.) challenge.
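To make the first point concrete, here is a minimal sketch (not the paper's implementation; names and codebook values are illustrative) of the VQ-VAE-style nearest-code lookup that clusters nearby continuous states onto one discrete code:

```python
import numpy as np

def quantize(z, codebook):
    """Nearest-codebook lookup as in VQ-VAE: map a continuous
    embedding z to the index of its closest discrete code."""
    dists = np.linalg.norm(codebook - z, axis=1)
    return int(np.argmin(dists))

# Illustrative codebook with 4 codes of dimension 2.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0], [1.0, -1.0]])

# Two slightly different continuous states collapse to the same code,
# which is why discretization raises the chance of re-encountering
# "the same" state during offline training.
a = quantize(np.array([0.05, -0.02]), codebook)
b = quantize(np.array([-0.03, 0.04]), codebook)
assert a == b == 0
```

In this toy setting, any state embedding in the neighborhood of a code is treated identically downstream, which is the clustering effect referred to above.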
We'll ensure that the experiment section further elucidates these distinctions. Your feedback is pivotal in sharpening our paper's clarity, and we're grateful for it.
# Additional Ablation Studies
Due to time constraints and space limitations, we are unable to present new ablation results at this time. However, we are committed to providing more comprehensive results in the revision:
1) Ablations for cases where multiple dimensions fall within a single group.
2) Ablation of the dimensionality of the discrete code (note that variations in the number of discrete codes have already been showcased in Figure 6 (d)).
3) Expanded results across different datasets, including expert, medium-replay, and medium-expert datasets.
# Computational Resources and Timeframes
All experiments were executed on Ubuntu 20.04, utilizing an Nvidia Tesla A6000 GPU. On average, training on a dataset required 2732.63 seconds, while inference for one episode (with a maximum of 1000 timesteps) took about 8.17 seconds. We will offer a detailed breakdown of these costs for each task in our revised submission.
# Societal Impact Discussion
While our research is grounded in the controlled Mujoco environment, the implications of transferring these findings to real-world settings merit consideration. Automated decision-making systems, if universally adopted, might reduce human roles in certain sectors, potentially leading to job displacements. It's crucial to strike a balance, ensuring technological advancements align with societal welfare, and to approach real-world deployments with an awareness of these broader implications.
# Addressing Minor Concerns
We'll correct the typo on Line 285 and add a GRU citation on Line 60 in the appendix.
- **Clarification on Discrete Representation**: In the revised version, we'll provide a more intuitive explanation early in the introduction: Imagine the difference between sorting colors into broad categories versus matching exact shades. The former, like a discrete state space, makes generalization easier by grouping similar items together. This is beneficial for offline RL algorithms, as it heightens the chance of re-encountering similar states during training, effectively addressing the out-of-distribution (o.o.d.) challenge inherent in continuous spaces.
# Commitment to Code Release
We will release our implementation after acceptance.
---
Rebuttal Comment 1.1:
Title: Response to the Rebuttal
Comment: Thanks for the detailed response; I appreciate the authors' effort to address my concerns, and I think concerns such as the intuition for using a discrete representation are well addressed. For now, however, I will keep my score, as the ablations have not been provided (especially the one where multiple dimensions fall within a single group, which was also requested by other reviewers such as iLBR).
---
Reply to Comment 1.1.1:
Title: A Response for Additional Ablation Studies (1/3)
Comment: Dear Reviewer,
Thank you for your constructive feedback.
In response, we have conducted additional experiments and present new ablation results that further substantiate our findings. We have outlined these results below:
# Ablation for Cases with Multiple Dimensions in a Single Group
In this ablation study, we grouped multiple dimensions into a single entity, using one state factor encoder to process the information within each group. When the number of remaining dimensions was smaller than the specified group size, we treated them as a separate (smaller) group. Our findings reveal that our model maintains superior performance even when the group size exceeds one.
__Table 1: Generalization performance under dynamic missing scenarios when multiple dimensions fall within a single group, with different missing ratios $\eta$ (%).__
|Group size| Dataset | Method | $\eta=10\%$ | $\eta=30\%$ | $\eta=50\%$ | $\eta=70\%$ | $\eta=90\%$ |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| 2 |halfcheetah-medium-v2| IQL_ORDER| __41.5__| __38.2__|__37.4__|__34.3__|__27.6__|
| 2 |halfcheetah-medium-v2| IQL_R| 7.5| 6.5|3.4|7.3|2.6|
| 2 |halfcheetah-medium-v2| IQL_FA| 31.5| 19.5|16.4|10.3|9.6|
| 2 |halfcheetah-medium-v2| IQL_FZ| 30.5| 16.5|16.4|8.3|6.4|
| 2 |hopper-medium-v2| IQL_ORDER| __72.5__| __70.1__|__62.2__|__57.4__|__50.6__|
| 2 |hopper-medium-v2| IQL_R| 20.5| 16.2|12.4|17.3|4.9|
| 2 |hopper-medium-v2| IQL_FA| 40.2| 36.2|22.4|19.3|7.4|
| 2 |hopper-medium-v2| IQL_FZ| 19.5| 16.3|12.4|11.3|6.7|
| 2 |walker2d-medium-v2| IQL_ORDER| __53.4__| __52.5__|__43.4__|__30.2__|__11.6__|
| 2 |walker2d-medium-v2| IQL_R| 5.5| 6.2|4.3|7.2|2.6|
| 2 |walker2d-medium-v2| IQL_FA| 50.2| 46.3|32.1|__30.7__|9.6|
| 2 |walker2d-medium-v2| IQL_FZ| 10.4| 6.5|6.4|7.3|4.2|
|Group size| Dataset | Method | $\eta=10\%$ | $\eta=30\%$ | $\eta=50\%$ | $\eta=70\%$ | $\eta=90\%$ |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| 3 |halfcheetah-medium-v2| IQL_ORDER| __40.2__| __36.2__|__32.2__|__24.0__|__21.3__|
| 3 |halfcheetah-medium-v2| IQL_R| 8.4| 4.2|3.7|2.4|2.0|
| 3 |halfcheetah-medium-v2| IQL_FA| 31.5| 17.5|13.4|12.2|8.7|
| 3 |halfcheetah-medium-v2| IQL_FZ| 21.3| 16.6|13.5|12.3|6.3|
| 3 |hopper-medium-v2| IQL_ORDER| __63.1__| __60.2__|__52.4__|__47.3__|__40.5__|
| 3 |hopper-medium-v2| IQL_R| 12.3| 11.3|5.3|7.0|4.7|
| 3 |hopper-medium-v2| IQL_FA| 37.3| 32.1|21.3|14.3|4.4|
| 3 |hopper-medium-v2| IQL_FZ| 19.3| 12.1|12.0|11.0|3.3|
| 3 |walker2d-medium-v2| IQL_ORDER| __52.3__| __42.1__|__33.2__|__27.2__|__10.2__|
| 3 |walker2d-medium-v2| IQL_R| 4.3| 2.1|4.2|4.0|2.1|
| 3 |walker2d-medium-v2| IQL_FA| 34.2| 32.1|23.4|21.7|9.4|
| 3 |walker2d-medium-v2| IQL_FZ| 10.9| 4.1|5.1|7.1|4.9|
__Table 2: Generalization performance under factor reduction scenarios when multiple dimensions fall within a single group, with different missing ratios $\eta$ (%).__
|Group size| Dataset | Method | $\eta=10\%$ | $\eta=30\%$ | $\eta=50\%$ | $\eta=70\%$ | $\eta=90\%$ |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| 2 |halfcheetah-medium-v2| IQL_ORDER| __41.3__| __37.5__|__31.4__|__21.4__|__11.6__|
| 2 |halfcheetah-medium-v2| IQL_R| 10.3| 6.2|4.7|7.8|6.5|
| 2 |halfcheetah-medium-v2| IQL_FZ| 10.9| 6.7|2.1|2.3|2.8|
| 2 |hopper-medium-v2| IQL_ORDER| __65.5__| __66.1__|__41.3__|__21.2__|__13.2__|
| 2 |hopper-medium-v2| IQL_R| 7.5| 6.3|4.9|7.0|2.1|
| 2 |hopper-medium-v2| IQL_FZ| 12.5| 6.2|5.4|7.1|6.9|
| 2 |walker2d-medium-v2| IQL_ORDER| __53.6__| __26.3__|__22.1__|__14.2__|__6.3__|
| 2 |walker2d-medium-v2| IQL_R| 3.5| 7.4|2.3|2.1|6.7|
| 2 |walker2d-medium-v2| IQL_FZ| 43.5| 21.3|12.0|7.3|8.7|
|Group size| Dataset | Method | $\eta=10\%$ | $\eta=30\%$ | $\eta=50\%$ | $\eta=70\%$ | $\eta=90\%$ |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| 3 |halfcheetah-medium-v2| IQL_ORDER| __41.7__| __32.1__|__29.1__|__21.7__|__10.7__|
| 3 |halfcheetah-medium-v2| IQL_R| 10.9| 8.1|7.9|7.1|5.4|
| 3 |halfcheetah-medium-v2| IQL_FZ| 10.1| 6.9|3.2|3.4|1.9|
| 3 |hopper-medium-v2| IQL_ORDER| __63.1__| __52.2__|__42.1__|__32.1__|__13.1__|
| 3 |hopper-medium-v2| IQL_R| 7.4| 5.2|4.0|9.5|3.2|
| 3 |hopper-medium-v2| IQL_FZ| 15.1| 6.1|3.2|8.2|7.3|
| 3 |walker2d-medium-v2| IQL_ORDER| __54.2__| __27.1__|__23.2__|__12.1__|__7.4__|
| 3 |walker2d-medium-v2| IQL_R| 3.8| 7.9|2.9|2.2|2.0|
| 3 |walker2d-medium-v2| IQL_FZ| 41.0| 22.4|22.1|21.4|23.7|
---
Reply to Comment 1.1.2:
Title: A Response for Additional Ablation Studies (2/3)
Comment: # Ablation of the Discrete Code Dimensionality
We have also explored how changes to the dimension or number of discrete codes impact our model's performance. We present the averaged scores of our model in the 'hopper-medium-v2' dataset as evidence. Our results show that increasing the number of discrete codes generally leads to improved model performance, likely due to an enhanced expressive capacity. However, beyond 40 codes, the gains in performance become negligible. This suggests an optimal trade-off between the number of discrete codes and model performance. Additionally, our experiments show that a code dimension of 2 yields excellent performance, but further increasing this parameter does not yield significant improvement.
__Table 3: Ablation results. Averaged score of different number of discrete codes in the hopper-medium-v2 dataset.__
|# discrete codes| Averaged score|
| :---: | :---: |
|10|40.2|
|20|56.3|
|30|49.5|
|40|60.2|
|50|62.3|
|60|62.4|
__Table 4: Ablation results. Averaged score of different codebook dimension in the hopper-medium-v2 dataset.__
|# codebook dimension| Averaged score|
| :---: | :---: |
|1|55.3|
|2|60.2|
|4|60.7|
|8|59.2|
|16|56.9|
|32|58.2|
# Expanded Results Across Different Datasets
To further validate our model’s generalizability, we have extended our experiments to include additional datasets—namely 'expert', 'medium-replay', and 'medium-expert'. Our extended results confirm that, in most cases, our model outperforms other baseline approaches across these varied datasets.
__Table 5: Generalization performance of different methods under dynamic missing scenarios with different missing ratios $\eta$ (%).__
| Dataset | Method | $\eta=10\%$ | $\eta=30\%$ | $\eta=50\%$ | $\eta=70\%$ |$\eta=90\%$ |
| :------------------------: | :----------: | :---: | :---: | :---: | :---: |:---: |
|halfcheetah-medium-expert-v2| IQL_ORDER| 80.7| __76.3__|__70.4__|__67.3__|__60.6__|
|halfcheetah-medium-expert-v2| IQL_R| 22.4| 20.3|17.4|15.6|13.2|
|halfcheetah-medium-expert-v2| IQL_FA| __80.9__| 63.3|53.5|30.2|20.4|
|halfcheetah-medium-expert-v2| IQL_FZ| 77.3| 52.1|32.2|21.2|18.4|
|halfcheetah-medium-replay-v2| IQL_ORDER| __40.2__| __38.1__|__33.2__|__25.6__|__20.2__|
|halfcheetah-medium-replay-v2| IQL_R| 10.2| 7.1|5.2|6.1|9.0|
|halfcheetah-medium-replay-v2| IQL_FA| 37.2| 33.2|26.2|18.4|10.5|
|halfcheetah-medium-replay-v2| IQL_FZ| 38.6| 23.5|15.2|12.4|8.3|
|halfcheetah-expert-v2| IQL_ORDER| __90.8__| __83.5__|__80.2__|__67.3__|__52.1__|
|halfcheetah-expert-v2| IQL_R| 26.4| 22.5|17.2|19.6|15.7|
|halfcheetah-expert-v2| IQL_FA| 88.3|80.4|75.1| 65.0|40.9|
|halfcheetah-expert-v2| IQL_FZ| 80.3|78.2|71.2| 60.0|42.1|
| Dataset | Method | $\eta=10\%$ | $\eta=30\%$ | $\eta=50\%$ | $\eta=70\%$ |$\eta=90\%$ |
| :------------------------: | :----------: | :---: | :---: | :---: | :---: |:---: |
|hopper-medium-expert-v2| IQL_ORDER| __87.7__| __77.2__|__70.0__|__65.2__|__61.0__|
|hopper-medium-expert-v2| IQL_R| 12.4| 20.5|10.4|9.7|12.3|
|hopper-medium-expert-v2| IQL_FA| 86.7| 63.6|57.3|35.3|23.8|
|hopper-medium-expert-v2| IQL_FZ| 85.3| 55.3|36.4|26.2|22.6|
|hopper-medium-replay-v2| IQL_ORDER| __90.2__| __88.3__|__83.6__|__65.6__|__50.2__|
|hopper-medium-replay-v2| IQL_R| 17.4| 13.5|13.4|10.0|9.5|
|hopper-medium-replay-v2| IQL_FA| 88.8| 83.6|77.5|55.3|23.8|
|hopper-medium-replay-v2| IQL_FZ| 86.8| 73.6|67.4|35.3|18.2|
|hopper-expert-v2| IQL_ORDER| __91.8__| __84.3__|__75.2__|__66.2__|__55.7__|
|hopper-expert-v2| IQL_R| 15.4| 12.5|14.7|8.3|8.5|
|hopper-expert-v2| IQL_FA| 89.9| 80.6|72.0|51.2|34.2|
|hopper-expert-v2| IQL_FZ| 88.3| 78.4|64.2|41.3|20.6|
| Dataset | Method | $\eta=10\%$ | $\eta=30\%$ | $\eta=50\%$ | $\eta=70\%$ |$\eta=90\%$ |
| :------------------------: | :----------: | :---: | :---: | :---: | :---: |:---: |
|walker2d-medium-expert-v2| IQL_ORDER| __100.2__| __96.5__|__82.4__|__77.5__|__66.6__|
|walker2d-medium-expert-v2| IQL_R| 31.5| 24.3|14.4|25.4|23.1|
|walker2d-medium-expert-v2| IQL_FA| 95.9| 83.3|63.5|50.6|40.4|
|walker2d-medium-expert-v2| IQL_FZ| 97.3| 62.1|52.4|46.2|28.3|
|walker2d-medium-replay-v2| IQL_ORDER| 66.1| __58.2__|__53.2__|__45.6__|__30.1__|
|walker2d-medium-replay-v2| IQL_R| 8.2| 9.1|5.6|7.3|6.0|
|walker2d-medium-replay-v2| IQL_FA| __67.2__| 43.1|36.3|28.4|10.0|
|walker2d-medium-replay-v2| IQL_FZ| 66.6| 33.0|25.2|22.7|12.5|
|walker2d-expert-v2| IQL_ORDER| __106.2__| __93.3__|__78.2__|__73.2__|__60.4__|
|walker2d-expert-v2| IQL_R| 46.4| 42.3|37.1|29.5|25.7|
|walker2d-expert-v2| IQL_FA| 98.3|90.4|55.1| 35.2|30.5|
|walker2d-expert-v2| IQL_FZ| 100.3|78.4|37.2| 33.0|22.1|
---
Reply to Comment 1.1.3:
Title: A Response for Additional Ablation Studies (3/3)
Comment: __Table 6: Generalization performance of different methods under factor reduction scenarios with different missing ratios $\eta$ (%).__
| Dataset | Method | $\eta=10\%$ | $\eta=30\%$ | $\eta=50\%$ | $\eta=70\%$ |$\eta=90\%$ |
| :------------------------: | :----------: | :---: | :---: | :---: | :---: |:---: |
|halfcheetah-medium-expert-v2| IQL_ORDER| __82.1__| __77.3__|__55.4__|__27.3__|__20.5__|
|halfcheetah-medium-expert-v2| IQL_R| 20.0| 18.3|17.2|10.6|13.9|
|halfcheetah-medium-expert-v2| IQL_FZ| 75.5| 30.4|22.2|20.2|14.3|
|halfcheetah-medium-replay-v2| IQL_ORDER| __39.4__| __35.1__|__21.5__|__15.3__|__10.2__|
|halfcheetah-medium-replay-v2| IQL_R| 10.0| 11.3|6.3|9.1|10.2|
|halfcheetah-medium-replay-v2| IQL_FZ| 33.6| 23.3|15.4|10.4|11.3|
|halfcheetah-expert-v2| IQL_ORDER| __89.5__| __83.3__|__67.2__|__47.3__|__32.0__|
|halfcheetah-expert-v2| IQL_R| 19.4| 12.4|10.3|10.3|13.5|
|halfcheetah-expert-v2| IQL_FZ| 80.5|48.1|31.3| 20.0|22.3|
| Dataset | Method | $\eta=10\%$ | $\eta=30\%$ | $\eta=50\%$ | $\eta=70\%$ |$\eta=90\%$ |
| :------------------------: | :----------: | :---: | :---: | :---: | :---: |:---: |
|hopper-medium-expert-v2| IQL_ORDER| __86.4__| __73.1__|__50.4__|__35.1__|11.0|
|hopper-medium-expert-v2| IQL_R| 12.4| 20.5|10.4|9.7|9.3|
|hopper-medium-expert-v2| IQL_FZ| 85.3| 25.3|16.3|16.2|__12.5__|
|hopper-medium-replay-v2| IQL_ORDER| __91.7__| __78.2__|__53.2__|__35.3__|__20.1__|
|hopper-medium-replay-v2| IQL_R| 19.4| 14.5|12.3|12.0|10.3|
|hopper-medium-replay-v2| IQL_FZ| 89.8| 33.6|17.3|15.2|13.5|
|hopper-expert-v2| IQL_ORDER| __90.3__| __74.2__|__55.1__|__26.1__|__15.5__|
|hopper-expert-v2| IQL_R| 12.4| 14.5|18.2|8.3|11.3|
|hopper-expert-v2| IQL_FZ| 84.3| 22.4|14.1|11.3|10.9|
| Dataset | Method | $\eta=10\%$ | $\eta=30\%$ | $\eta=50\%$ | $\eta=70\%$ |$\eta=90\%$ |
| :------------------------: | :----------: | :---: | :---: | :---: | :---: |:---: |
|walker2d-medium-expert-v2| IQL_ORDER| __100.5__| __76.5__|__42.4__|__27.3__|__6.6__|
|walker2d-medium-expert-v2| IQL_R| 11.5| 4.3|4.5|5.9|3.1|
|walker2d-medium-expert-v2| IQL_FZ| 97.3| 62.1|52.4|46.2|28.3|
|walker2d-medium-replay-v2| IQL_ORDER| __63.1__| __50.2__|__33.1__|__25.9__|__10.1__|
|walker2d-medium-replay-v2| IQL_R| 8.1| 8.5|10.6|4.2|8.0|
|walker2d-medium-replay-v2| IQL_FZ| 62.6| 23.1|15.3|12.2|8.5|
|walker2d-expert-v2| IQL_ORDER| __103.2__| __73.3__|__48.1__|__23.2__|5.4|
|walker2d-expert-v2| IQL_R| 16.4| 14.4|17.3|9.2|6.7|
|walker2d-expert-v2| IQL_FZ| 81.2|58.4|27.1| 13.0|__12.2__|
We hope these additional experiments and clarifications address your concerns and demonstrate the robustness and efficacy of our approach.
Thank you once again for your valuable insights.
Sincerely,
Authors | Summary: The work looks to tackle a subset of the partially observable offline reinforcement learning problem setting, where a dataset of offline experience (of full states and masked states) is given during training, and an agent is tested on masked state features during test time. The authors propose the ORDER training framework, where during training time, an agent first learns discrete state representations using the full states, then uses an RNN to learn (in a supervised manner from the discrete representations from the full states) the discrete state representations over the masked states.
With this framework in place, the authors show that this approach does well on a masked version of the D4RL offline reinforcement learning benchmark, which they create themselves. They show that this form of discretization performs better on this benchmark than the other baselines. They also show an improvement over the baselines as a higher percentage of the state features is masked out.
Strengths: Within this specific problem setting, the authors do a good job of showing the benefits of their framework and of discretization. It seems that with this problem set up, discretization and supervised learning over masked states helps quite a bit with performance, and is also quite robust to the percentage of masking based on the plots shown in section 5.
Weaknesses: There are a few important issues that this paper seems to have, which I’ll describe in broad strokes in this section. For more specific details on these issues, please see the section-by-section review.
The first big issue is that the work seems to have mixed its problem setting with its solution method. The two seem innately tied - even the structure of the work mixes the two together. While the authors start the work by describing the general problem setting of partial observability in offline reinforcement learning, the work continues on to describe a problem setting that is quite far off that mark - where the partial observability stems only from simple Bernoulli masks, and full state information is given during the training phase. Besides this, additional assumptions are also given to the agent, such as the agent actually knowing *which* state features are masked and when (which means that **the agent has to know what's wrong with the state features at every step**). All of these assumptions on the problem setting are necessary for ORDER to work. While I'm not saying there's anything fundamentally wrong with the problem setting they describe, **the issue arises with the fact that the proposed solution method, based on the evidence presented in this work, seems to only work in this extremely specific problem setting that they've introduced.**
The language and structure used throughout the work are also quite misleading, especially in the introduction and abstract, where the work seems to have overclaimed in the introduction and underdelivered from Section 3 onwards. Before reading Section 3, you describe your method as general, with the ability to tackle POMDPs in an offline setting. But after your preliminary section, you seem to reduce the scope suddenly (masked observation functions, different observation functions, access to states during training). I would recommend adding specificity to the problem setting that you describe throughout your work.
Besides this, I find it somewhat suspicious that IQL_R doesn’t seem to work in any of the environments/datasets, and shows the same flat line throughout all environments/datasets. This calls into question implementation or how hyperparameter tuning was done, which was lacking in the appendix.
Lastly, I’m also surprised you didn’t reference hindsight state information, seeing as their problem setup is similar to yours. It seems that this setup is more similar to what the authors are trying to tackle.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Here is a central question I would like answered with regards to the work: From Appendix B, “Following this, we employ a grid search strategy to finalize our choice of hyper-parameters, the details of which are reported in Table 5.” What was this grid search strategy? How did you arrive at these hyperparameters?
Besides this question, here is a section-by-section review/questions with regard to the work:
**1.**
“While POMDP methods excel in online RL, they rely on continuous environment interactions for policy adaptation, making them unsuitable for offline RL” - This statement confused me a bit. Do you mean that you need on-policy samples for POMDP methods?
“we aim to tackle a more general POMDP problem” - This is false. Your problem setup is not more general, seeing as you have access to states during training. The most general class of POMDP problem settings is the one where you don't have access to the state at all. That setting encompasses “POMDP families” by simply expanding the state space instead of positing multiple observation functions.
This formulation, and the claim that the authors make about tackling a “more general POMDP problem,” is also **untrue** in your problem setting. There aren't different observation functions between train and test time; you simply use a stochastic observation function.
**3.**
“In particular, during the training period, the policy has access to the underlying MDP” - I'm assuming you mean *samples which include state information*? If you have the underlying MDP, then why not do online RL? Please be more precise.
**4.**
“By converting unseen partial observations into discrete forms akin to those present in the training data”
**Is all this machine learning machinery necessary to discretize states?** We've had tile coding and other very effective state discretization schemes for a while now. Why do we need a VQ-VAE?
“And we introduce partial observability by randomly setting a portion of mask variables m^i_t to be 0, simulating real-world partially observable scenarios.” - this entire problem setup is a big red flag. **The problem setup is not a realistic representation of partial observability *at all*.** From reading this paper, it sounds like you've essentially worked backwards: you defined a constrained solution method first, then defined your problem setting based on that solution method. It seems that your solution method essentially hinges on the fact that your partial observations take the form of *independently* masking your state feature vectors, which is almost never the case in the real world.
Do I understand this correctly, that **the agent is given the masking vector M?** If this is the case, then this is another huge assumption. The LSTM here is essentially just predicting missing state features, knowing which ones are missing.
**4.2.2**
What is g here?
**5.**
“Our results clearly indicate that ORDER substantially enhances the generalization performance of policies trained on offline datasets in diverse partially observable conditions.” This claim is tenuous at best. How is this one partial-observability setting considered “diverse partially observable conditions”?
Looking at your appendix, I’m not sure how you’ve swept hyperparameters and arrived at the hyperparameters given for the baselines. Most importantly, the IQL_R baseline. I’m also unsure as to what IQL_R is exactly. Is it simply IQL with an RNN trained over the masked D4RL datasets?
**5.1**
“An intrinsic characteristic of a policy with strong generalization ability is its ability to maintain performance as the missing ratio increases.” - what does strong generalization performance mean? It seems you mean “generalization in terms of number of masked features”, but more specificity in this language is needed, seeing as “generalization” means a whole deluge of different things in the reinforcement learning context. **This seems to also be an issue throughout this work, seeing the context in which generalization is used.**
“This suggests that it struggles to develop effective policies under diverse and dynamic partial observation settings.” - This is markedly untrue in general - it might be true **specifically in the problem setting that you invented**, but I'm also unconvinced that this is currently the case, as I'm unsure of how you've swept and decided on hyperparameters.
**5.2**
Is using the overlap in t-SNE the best idea for seeing overlap in representations? It’s a dimensionality reduction technique, so the overlap doesn’t necessarily mean much. Why not just use a cosine similarity metric instead? It would be better to have numbers to show the “similarities” in the representations.
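For instance, the suggested metric could be as simple as the following sketch (illustrative names and data, not from the paper): compare matched pairs of representations from the full-state and masked-state encoders with row-wise cosine similarity.

```python
import numpy as np

def rowwise_cosine(A, B):
    """Mean cosine similarity between matched rows of two
    representation matrices - a quantitative alternative to
    eyeballing t-SNE overlap."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return float(np.sum(A * B, axis=1).mean())

rng = np.random.default_rng(0)
full = rng.standard_normal((100, 16))          # e.g. full-state representations
close = full + 0.01 * rng.standard_normal((100, 16))  # nearly identical reps
far = rng.standard_normal((100, 16))           # unrelated reps

# Matched representations score near 1; unrelated ones near 0.
assert rowwise_cosine(full, close) > rowwise_cosine(full, far)
```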
**6.**
“that addresses the challenges of partial observability in real-world scenarios.” - **this is just flat-out untrue. MuJoCo with random Bernoulli masks is far from real-world scenarios.**
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: Please see weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer:
Thank you for your thoughtful questions and feedback on our paper. Below, we provide detailed responses to each of your questions:
# Clarifying Our Problem Setting and Its Real-World Relevance
**Real-world Motivations:** Our specific assumptions, though tailored, are rooted in real scenarios where observation functions are diverse and often unpredictable. Our goal is to bridge this gap with robust solutions.
- **Search and Rescue:** A robot, trained on complete offline data in controlled conditions, may face varied challenges when deployed in different disaster zones. Factors like smoke, murky waters, or sensor noise introduce unpredictable observation functions, emphasizing the need for our approach.
- **Financial Markets:** Algorithms trained on comprehensive historical data can encounter diverse market conditions where only specific subsets of indicators are reliable. The ever-changing nature of markets results in unpredictable observation functions, further underscoring our approach's relevance.
**Assumption Specificity:** We recognize that our assumptions represent distinct scenarios but stand firm in their practical significance:
- **Full State Information during Training:** Though this assumption is seemingly idealistic, asymmetric RL for POMDPs [1-2] and RL for hindsight observable MDPs [3] have effectively leveraged this for addressing challenges, e.g., self-driving vehicles [1].
- **Employing Masks for Simulating Diverse Observation Functions:**
1. Masks effectively capture some real-world partial observable scenarios, such as robotics where sensor occlusions can be modeled as feature masks.
2. Traditional POMDPs, as described in [3-6], utilize a consistent observation function that maps the full state to a partial one, often masking a predetermined set of factors. In contrast, our approach introduces unpredictability by avoiding assumptions about which factors are masked during testing. This results in diverse and unpredictable mapping functions during evaluation.
3. To represent this diversity, we employ two specific scenarios (refer to lines 235-241). It's essential to note that **our generated observation functions are not equivalent to a straightforward stochastic Bernoulli function.** Our practical method of sampling multiple mask observation functions is explained in lines 74-80 of the Appendix. The complex interplay of differing missing ratios, distinct scenarios, and variable masked sections surpasses what can be captured by a singular Bernoulli distribution, highlighting the need for robust generalization.
- **Awareness of Mask Vectors:** While we recognize that our assumption may not hold universally, it's pragmatically valid in many contexts and aligns with existing research conventions.
1. Practical domains like robotics and network systems frequently allow the detection of 'missing' or anomalous data, evidenced by discernible sensor disturbances or notable anomaly indicators.
2. Our approach resonates with numerous established works in the POMDP realm [4] and studies addressing data missingness [7-8], which operate under similar assumptions regarding the awareness of mask vectors.
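As an illustration of the diversity argument under "Employing Masks" above, sampling many distinct mask observation functions at a given missing ratio $\eta$ might be sketched as follows (hypothetical names; the paper's exact procedure is described in lines 74-80 of its appendix):

```python
import numpy as np

def sample_mask_fn(state_dim, eta, rng):
    """Sample one mask observation function: a fixed subset of
    roughly eta * state_dim features is hidden (mask = 0).
    Illustrative only, not the paper's exact procedure."""
    n_masked = int(round(eta * state_dim))
    hidden = rng.choice(state_dim, size=n_masked, replace=False)
    m = np.ones(state_dim)
    m[hidden] = 0.0
    # Each sampled function maps a full state to (observation, mask).
    return lambda s: (s * m, m)

rng = np.random.default_rng(0)
# Several distinct observation functions at the same missing ratio:
obs_fns = [sample_mask_fn(11, eta=0.3, rng=rng) for _ in range(5)]

s = rng.standard_normal(11)
o, m = obs_fns[0](s)
assert int(m.sum()) == 11 - 3  # 30% of 11 features (rounded) are masked
```

Because which features are hidden differs across sampled functions, the resulting family of observation functions is richer than a single per-feature Bernoulli distribution.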
# Addressing Concerns in Method Description: Proposed Revisions
Thank you for the feedback. We are committed to refining our paper:
1. We acknowledge discrepancies between our introduction, abstract, and Section 3. We will harmonize these parts to clearly depict our method's capabilities and constraints, ensuring the work's scope is evident from the start.
2. We'll clarify the rationale behind our chosen test settings, including our decision to use masked observation functions, the diversity of observation functions, and the accessibility to states during training (as elaborated in the above response).
3. For overarching clarity, we will comb through the manuscript ensuring the problem, method, and results' scope are consistently and lucidly presented.
# Addressing IQL_R Performance and Hyperparameter Tuning Concerns
We have provided a more detailed explanation, along with supplementary results and hyperparameter selection tables, in the accompanying PDF file in the global response panel.
# Distinguishing Our Method from Hindsight State Information [3]
While there are similarities, key distinctions also exist. Our method investigates uncertain and diverse mask observation functions in an offline mode, in contrast to the online, singular observation function in hindsight state information. For clarity, we have summarized these differences in Table 3 of the attached PDF.
# Other Queries:
- **Online POMDP in Offline RL:** Online RL is optimized for live interactions. Without adjustments, their direct application to offline RL can lead to poor performance [9].
- **Access to MDP:** None. This will be clarified in the revision.
- **Use of VQ-VAE:** VQ-VAE stands as SoTA in discrete representation learning, justifying its incorporation.
- **'g' Defined:** Refers to the nearest lookup, as noted on line 198.
- **Choosing t-SNE:** According to [10], t-SNE ensures that similar objects align closely, while dissimilar ones are positioned farther apart, making it suitable for visual similarity comparisons.
1. Robust asymmetric learning in pomdps. ICML 2021.
2. Leveraging fully observable policies for learning under partial observability. CoRL 2022.
3. Learning in POMDPs is Sample-Efficient with Hindsight Observability. ICML 2023.
4. Deep recurrent belief propagation network for POMDPs. AAAI 2021.
5. Deep variational reinforcement learning for pomdps. ICML 2018.
6. Recurrent Model-Free RL Can Be a Strong Baseline for Many POMDPs. ICML 2022.
7. GAIN: Missing Data Imputation using Generative Adversarial Nets. ICML 2018.
8. Variational Selective Autoencoder: Learning from Partially-Observed Heterogeneous Data. AISTATS 2021.
9. Off-Policy Deep Reinforcement Learning without Exploration. ICML 2019.
10. Visualizing Data Using t-SNE. 2008
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks for the response. Although a few of my concerns have been addressed, my main concerns have still not been fully alleviated.
**"Traditional POMDPs, as described in [3-6], utilize a consistent observation function that maps the full state to a partial one, often masking a predetermined set of factors"**
While there is one work listed that does use random masking (4 in your given list), the other works introduce POMDPs that don't use masking as you've described it (or even random masking!).
Looking at classic work in the POMDP literature [1, 2], the partial observability present is much more complex in its nature. My main concern with this problem setting is the extensibility of this proposed solution method, especially since it relies so heavily on the masking assumptions made by the problem setting.
**"We will harmonize these parts to clearly depict our method's capabilities and constraints, ensuring the work's scope is evident from the start."**
Could you be more specific with what you'll be changing? My biggest issue with this point was the overclaiming in the first few sections of the work. What are you going to do to address these concerns of overclaiming?
To conclude, while I do appreciate the effort in "ensuring the work's scope is evident from the start", my concerns with the problem setting are still largely unaddressed. I do not believe the problem setting is well motivated enough in its current form.
[1] Leslie Pack Kaelbling, Michael L. Littman, Anthony R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, Volume 101, Issues 1–2, 1998. Pages 99-134.
[2] Michael L. Littman, Anthony R. Cassandra, Leslie Pack Kaelbling. Learning policies for partially observable environments: Scaling up. Armand Prieditis, Stuart Russell. Machine Learning Proceedings 1995. Pages 362-370.
---
Reply to Comment 1.1.1:
Title: Responses for addressing your concerns ( Extensibility of Problem Setting )
Comment: Dear reviewer,
Thank you for your timely and valuable feedback. I appreciate the opportunity to clarify the key aspects of our work.
**Understanding of Problem Setting:** Firstly, I would like to clarify that our setting is designed to be versatile, not restricted to __explicit__ mask observation functions. Any partial observation scenario where certain state information is unobserved can be interpreted within our mask observation framework. We'll provide clarifying examples subsequently.
**Relation to Classic POMDP Literature:** We recognize that our mask observation functions may not encompass all forms of partial observability, such as state perturbations or noises cited in classic POMDP works. However, we assert that our approach effectively models scenarios where certain state information is observable while others are not, which is a widespread and realistic form of partial observability.
**Extensibility of Our Setting:** Our approach is crafted for broad applicability. It accommodates a wide array of mask observation functions, as identified in existing literature. To further substantiate the flexibility and relevance of our method, we will provide examples demonstrating its alignment with various established works.
- **Example-1: Revisiting Frozen Lake** Referring to Section 6 of [1], the agent in the "Revisiting Frozen Lake" environment cannot observe the hazard's position. In our approach, the mask vector for the hazard is consistently set to 1, while other positions are set to 0. This implies the agent observes everything except the hidden hazard.
- **Example-2: Safe Autonomous Vehicle Learning** Section 6 of [1] mentions that the "Safe Autonomous Vehicle Learning" environment's partial observability arises when the field of view is obstructed. In our model, obstructions are represented by setting the respective mask vector to 1.
- **Example-3: LunarLander-P, LunarLander-V** In Section 5.1 of [2], the authors directly state:
>"Masking parts of the state to turn MDPs into POMDPs is common in previous work [34, 3, 35, 5], the agent only observes subsets of the full state."
This statement is in alignment with our approach, where unobserved state information is represented by setting corresponding mask vectors to 1.
- **Example-4: Car-Flag** In Section 5.1 of [2], agents usually observe the car's position and velocity. However, proximity to the blue flag lets them observe the green flag's side (left/right). Our framework defaults the flag's side mask value to 1, changing it to 0 when observable.
- **Example-5: 8 OpenAI Gym Locomotion Tasks** The "Experiments" section of [4] introduces partial observability to eight tasks by directly masking specific state information. These tasks include _HalfCheetah_, _Hopper_, _Ant_, _InvertedDoublePendulum_, _InvertedPendulum_, _Swimmer_, _Reacher_, and _Walker2d_.
- **Example-6: Flickering Atari** Section 5.2 of [5] employs the "Flickering Atari" as a POMDP benchmark, presenting a blank screen for half of the observations. Our methodology aligns perfectly with this: at each time step, our mask vector has a 50% chance of being set to 1, symbolizing a fully unobserved state.
- **Example-7: Occlusion Benchmark in 8 'Standard POMDP' Environments** In subsection "Standard POMDP" of Section 5.1 of [6], VRM proposes the Occlusion Benchmark, comprising eight environments: _Hopper-P_, _Ant-P_, _Walker-P_, _Cheetah-P_, _Hopper-V_, _Ant-V_, _Walker-V_, _Cheetah-V_. Here, “-P” denotes observations of positions and angles only, and “-V” denotes observations of velocities only. In line with our approach, the mask vector for observable state information is set to 0, and for unobservable information, it is set to 1.
- **Example-8: 4 'Meta RL' Environments** In subsection "Meta RL" of Section 5.1 of [6], environments _Semi-Circle_, _Wind_, _Cheetah-Dir_, and _Ant-Dir_ feature POMDPs where certain parameters in rewards or dynamics vary between episodes but remain constant within a single episode. In our framework, the mask vectors for these varying parameters are consistently set to 1, indicating that these parameters are not observable to the agent.
- **Example-9: 3 'Robust RL' Environments** In subsection "Robust RL" of Section 5.1 of [6], the environments _Cheetah-Robust_, _Hopper-Robust_, and _Walker-Robust_ are described. In these scenarios, certain hidden states, such as the density and friction coefficients of simulated robots, remain fixed throughout an episode. To simulate this partial observability, our approach involves masking these coefficients by setting their corresponding mask vectors to 1.
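As a concrete illustration of the convention running through these examples (mask value 1 = hidden factor, 0 = observed), here is a minimal sketch; the Frozen Lake factor layout and names are illustrative, not taken from [1]:

```python
def apply_mask(state, mask, mask_token="[MASK]"):
    """Hide the state factors whose mask value is 1, as in the examples above."""
    return [mask_token if m == 1 else s for s, m in zip(state, mask)]

# Example-1 style: the hazard position is always hidden (mask fixed to 1),
# while the agent and goal positions remain observable.
state = [(0, 0), (3, 3), (1, 2)]   # agent, goal, hazard (illustrative layout)
mask = [0, 0, 1]
obs = apply_mask(state, mask)
```

The same pattern covers the "-P"/"-V" occlusion benchmarks: the mask simply selects the position or velocity block of the state vector instead of a single factor.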
---
Reply to Comment 1.1.2:
Title: Responses for addressing your concerns ( Revisions for Ensuring the Work's Scope is Evident from the Start )
Comment: We understand the importance of precisely defining the scope and contributions of our work, and we are committed to addressing this in the revised manuscript. Here is a detailed breakdown of the specific changes we plan to make:
1. **Abstract Revision**:
We will revise the abstract to highlight the two key factors of our method:
- i) learning from __full observations__ during offline training, and
- ii) its designed application towards __masked partial observabilities__.
**Revised Abstract:**
> ...which brings crucial challenges of the deployment of offline RL methods: i) the policy trained on data with __full observability__ is not robust against the __masked partial observability__ during execution, and ii) the modality of the __masked partial observability__ is usually unknown during training. In order to address these challenges, we present Offline RL with Discrete pRoxy rEpresentations (ORDER), a probabilistic framework which leverages novel state representations to improve the robustness against diverse __masked partial observabilities__. Specifically, we propose a discrete representation of the states and use a proxy representation to recover the states from __masked partial observable trajectories__. The training of ORDER can be compactly described as the following three steps. i) Learning the discrete state representations on data with __full observations__, ii) Training the decision module based on the discrete representations, and iii) Training the proxy discrete representations on the data with various __masked partial observations__, aligning with the discrete representations. We conduct extensive experiments to evaluate ORDER, showcasing its effectiveness in offline RL for diverse __masked partially observable scenarios__ and highlighting the significance of discrete proxy representations in improved generalization performance.
ORDER is a flexible framework that can employ any state-of-the-art offline RL algorithm, and we hope that ORDER can pave the way for the deployment of RL policies against various __masked partial observabilities__ in the real world.
**2. Introduction Revision:**
- **Adding a Statement for Defining the Scope of Our Specific Problem Setting**
Revised statement to be inserted at the start of line 43 of the introduction section:
> Guided by this motivation, our work targets the specific challenge where, although complete state information is available from an offline dataset, the deployment stage might face various masked partial observations in which certain state information remains hidden while other parts are visible.
- **Highlighting the Limitations**
Revised limitation statement to be added in line 72:
> Nevertheless, it is important to acknowledge existing limitations. ORDER is presently tailored to address masked partial observation functions and does not extend to other forms, such as perturbations and noises. Additionally, the framework’s efficacy in complex real-world applications, like autonomous driving, warrants further investigation. Regardless, ORDER is a flexible framework...
**3. Related Works Revision:**
- **Discussion on Full State Information Access During Training:**
Insert at line 106 of the related works section:
> "While it may appear idealistic to assume access to full state information during training, asymmetric RL for POMDPs [1-2] and RL for hindsight observable MDPs [3] have employed this assumption effectively to address various challenges, including self-driving vehicles [1]."
- **Discussion on Masked Observation Functions:**
Insert at the end of the related works section:
> "Masking is a practical representation of partial observability in real-world scenarios, such as robotics, where sensor occlusions can be interpreted as feature masks. Traditional POMDPs [3-6] typically employ a consistent observation function, mapping full states to partial ones based on predetermined factors. Unlike these approaches, our method introduces variability, as it avoids making specific assumptions about which factors will be masked during testing, leading to more dynamic and unpredictable mapping functions during evaluation."
- **Discussion on Awareness of Missing Vectors:**
Insert at the end of the related works section:
> "Our assumption about awareness of missing vectors, while not universally true, is pragmatically valid in many contexts, including robotics and network systems where missing or anomalous data can often be detected through sensor disturbances or specific anomaly indicators. This assumption is consistent with established practices in POMDP research [4] and studies on data missingness [7-8]."
---
Thank you for your valuable comment again. If you have any other questions, please post them. We are happy to continue our communication.
Authors
---
Reply to Comment 1.1.3:
Comment: Dear Reviewer AAho,
Thank you once again for your invaluable feedback. We are approaching our discussion deadline soon, and have carefully considered your comments to provide further responses for your second feedback. Specifically, we are addressing the two main points you highlighted:
- __Extensibility of Our Problem Setting Using Mask Observation Functions__:
We wish to emphasize that our framework is not confined to **explicit** mask observation functions. It is crafted to be adaptable to a variety of scenarios where parts of the state information are unobserved. Importantly, in our revised manuscript, we present several **concrete examples** that align well with our masked observation setting, demonstrating its broad applicability beyond traditional contexts in POMDP literature.
- __Ensuring the Work's Scope is Evident from the Start__:
We have outlined specific revisions we intend to make to clearly convey the scope of our work right from the beginning. These revisions encompass the abstract, introduction, and related works sections, as detailed earlier.
In light of the upcoming discussion deadline, we sincerely appreciate your time in reviewing our work. We would be most grateful for your prompt feedback on our responses, as this will greatly assist us in adhering to the discussion timeline. Please do not hesitate to let us know if you have any additional questions or require further clarifications. We eagerly look forward to your continued feedback and hope to engage in productive dialogue swiftly.
Warm regards,
Authors | Summary: This work studies masked partial observability in reinforcement learning and proposes a novel method, ORDER, to address this challenge. ORDER leverages the alignment of discrete state representations and significantly improves robustness and generalization performance across diverse masked partial observabilities. Experiments demonstrate the effectiveness of ORDER in some settings.
Strengths: This work leverages the alignment between discrete state representations learned from fully observable offline data and discrete proxy representations for partially observable experiences, constructing a novel proxy policy for partially observable online interaction, which improves robustness and generalization across diverse partial observations.
Weaknesses: The authors claim that ORDER is suitable for heterogeneous state factors, but only provide experimental results for simple homogeneous state factor settings.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Is it reasonable to convert any unobservable state factor into a single mask token $e^{[mask]}$? Is there a convincing reason, or is it just a practical implementation choice to enable algorithm execution? $e^{[mask]}$ is learned by minimizing (6) along with the trajectory encoder and prediction heads; can you explain what it means to learn such a mask token?
2. You mentioned that the state factors can be heterogeneous, but it seems that your experiments only include homogeneous cases where each dimension of the observation vector is treated as an independent state factor. I can accept this as a simplified experimental setting, but I wonder, in general heterogeneous cases where each state factor differs in type and size, how should the number of discrete codes be chosen for each factor? A common discretization degree seems unreasonable then.
3. Are some kinds of heterogeneous state factors, such as high-dimensional multimedia data, suitable for discrete representations? Since in your experiments each state factor is a single scalar, which is an extreme simplification compared to the heterogeneous factors that you claim to handle, I really doubt this point.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: ORDER currently focuses on masked partial observation, leaving other forms like perturbations and noises unaddressed. Moreover, its applicability to complex heterogeneous high-dimensional real-world tasks remains to be explored.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer:
Thank you for your thoughtful questions and feedback on our paper. Below, we provide detailed responses to each of your questions:
# Single Mask Token Justification
Thanks for your feedback. Your query regarding the use of a single mask token is apt. The primary intention behind using just one mask token is to provide the network with a cue that certain information is occluded, rather than detailing the specific content of the occlusion. This serves as an efficient indicator. In practice, this approach finds resonance with prevailing techniques in transformer architectures, such as in the MAE [1] model. The essence is to highlight the presence of an occlusion rather than its intricacies, for which a single token suffices.
[1] *Masked Autoencoders Are Scalable Vision Learners.* CVPR 2022.
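To illustrate the single-mask-token idea (one shared learned vector substituted for any occluded factor, in the spirit of MAE's mask token), here is a toy sketch; the embedding width, token initialization, and helper names are assumptions for illustration only:

```python
import random

rng = random.Random(0)
EMB_DIM = 4  # illustrative embedding width, not the paper's setting

# A single shared (learnable) mask token e_[mask]: it signals *that* a factor
# is occluded rather than describing *what* is occluded.
e_mask = [rng.gauss(0.0, 0.02) for _ in range(EMB_DIM)]

def embed_factor(x):
    """Toy embedding of one scalar state factor."""
    return [float(x)] * EMB_DIM

def encode_observation(state, mask):
    # mask[i] == 1 -> factor i is unobserved -> substitute the shared token
    return [e_mask if m == 1 else embed_factor(s) for s, m in zip(state, mask)]

tokens = encode_observation([0.5, 1.5, -2.0], [0, 1, 0])
```

In training, `e_mask` would be updated by gradient descent along with the encoder, so it learns whatever cue most helps downstream prediction.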
# Addressing Heterogeneity in State Factors
Thanks for your feedback. Your observation regarding the treatment of state factors in our experiments is keen. In the MuJoCo tasks that we worked on, state factors are indeed heterogeneous in nature. Specifically, the state vector amalgamates two distinct types of data: the robot's sensor positions and its angular velocities.
When state factors diverge in type and magnitude, a reasonable approach to select the number of discrete codes would be to gauge the redundancy of information inherent to each state factor. In simpler terms, state factors carrying more intricate or effective information might necessitate a greater number of discrete codes. To facilitate this operation, expertise from human professionals can be sought or alternative information measurement techniques can be employed. This ensures a tailored discretization that aligns with the unique characteristics of each state factor.
# Addressing High-dimensional Multimedia Data
Thank you for your thoughtful comment. We agree that this is an important consideration. In support of our approach, we refer to a relevant study [1] in the domain of model-based reinforcement learning. This work demonstrates the efficacy of discrete representations in enhancing policy performance, particularly when dealing with high-dimensional data such as images in gaming scenarios. Notably, the findings indicate that encoding such data into discrete representations not only preserves performance but also enhances robustness by mitigating the impact of information noise associated with high-dimensional factors.
We acknowledge the potential oversimplification in our current experimental setup and will address this concern by including a concise discussion in the related work section, highlighting the relevancy of discrete representations for handling high-dimensional multimedia data.
[1] *Mastering Atari with Discrete World Models.* ICLR 2021.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I think this paper is generally well done. However, it requires a more specific design approach when dealing with heterogeneous state factors. Therefore, I will maintain my current score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you for recognizing the merits of our paper and offering valuable feedback.
Regarding your suggestion on a more specific design approach for heterogeneous state factors, we appreciate the insight. Indeed, different state encoders can be tailored for diverse data modalities. For instance:
- **Image Data:** We could employ a CNN to effectively capture image-related state information.
- **Vector-Based Data:** An MLP can be optimized to address vector-based state factors.
- **Text-Based Data:** For state information present in textual format, transformers can be an ideal choice.
Furthermore, by adjusting the hyperparameters related to the codebook dimension and discrete codes count, we can finely tune the information capacity for each heterogeneous state factor. This approach allows for a more nuanced and effective representation of diverse state information.
We hope this clarifies our approach and assuages any concerns you might have.
Warm regards,
Authors | Summary: In this paper, the authors present a three stage method for offline RL in POMDPs. It is assumed that the agent has access to full observations (state) during training, but only masked versions of this state (partial observations) during inference or deployment. In the first stage of training, a mapping is learnt from the original state space to a discrete state space. In the second stage, an optimal policy is learnt using offline RL which maps this discrete state to an action. Finally, in the third stage, a proxy state representation model is learnt, which maps the history of partial observations to a proxy discrete state, which is aligned with the discrete state corresponding to the underlying system state. As different state factors can be masked during inference, this method presents a solution that generalises well to a family of POMDPs obtained from the underlying MDP using different masking strategies. The authors demonstrate this method on four MuJoCo environments.
Strengths: 1. The paper is novel, very well written and easy to comprehend.
2. The paper presents generalising an RL solution of an MDP to a family of POMDPs, thereby making RL generalisable and fairly robust to several real-world problems with a type of partial observations (masked state factors).
Weaknesses: 1. There is no analysis/theory presented regarding loss of optimality due to the discrete state representation.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. [Typo] In Line 164, shouldn't the serial number be "i" instead of "ii"?
2. [Typo] In Eq(1) is a term missing after "where"?
3. How can it be proven that the discrete state obtained is indeed a state sufficient for control?
4. I did not understand the difference and the motivation between the second and the third terms in Eq (3). Can the authors please explain the same?
5. \tau is not defined. Also, the subscript corresponding to \tau is sometimes just "t-1" and sometimes "0:t-1". Can these be made consistent with a proper definition?
6. [Typo] In line 230 consider replacing "has" with "have".
7. [Typo] Reference missing (??) in Line 66 of Supplementary Material.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have covered the limitations of their method. While they do not mention the societal impact, since the paper deals with a general algorithm, the societal impact is same as in any RL case.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer:
Thank you for your thoughtful questions and feedback on our paper. Below, we provide detailed responses to each of your questions:
# Response to Typographical Comments:
1. **Line 164**: You're right, and we apologize for the oversight. It should indeed be "i" instead of "ii". This will be rectified.
2. **Eq(1)**: Thank you for pointing it out. The term $j$ was indeed missing after "where". We will correct this in the revised manuscript.
3. **Line 230**: Your observation is accurate, and the correction has already been implemented.
4. **Line 66 of Supplementary Material**: We apologize for the omission. The intended reference is "Algorithm 4". We'll ensure it is properly cited in the revision.
# Response to the Discrete State Query:
Your query regarding the sufficiency of the discrete state for control is insightful. To demonstrate the sufficiency of our discrete representation:
1. **Performance Metrics**: We utilize standard benchmarks and performance metrics in our experiments. The fact that our method, using the discrete state representation, achieves competitive or superior performance indicates that the representation captures the necessary state information for effective control.
2. **Robustness in Varied Environments**: In addition to standard benchmarks, we evaluate our approach in a variety of environments and scenarios. The discrete state's ability to generalize across these settings further attests to its sufficiency.
3. **Comparison with Continuous Representation**: We juxtapose the performance of our discrete representation with its continuous counterpart. The discrete representation's comparable or better performance suggests its capability to serve as an efficient and sufficient control state.
In the future, one could also explore methods to directly measure the information content or fidelity of the discrete state against the true continuous state. However, for the scope of this paper, our empirical results act as a testament to the discrete state's efficacy for control tasks.
# Response to Query on Eq (3) Terms:
Your query regarding the second and third terms in Eq (3) is pertinent. To provide clarity:
1. **Purpose of Both Terms**: Both terms originate from the VQ-VAE framework, aimed at ensuring high-quality representations during training.
2. **Difference in Position of $sg()$**: The positioning of the "stop gradient" function, $sg()$, differentiates the terms. It's pivotal in dictating where the gradient propagates during backpropagation.
3. **Separate Control Mechanisms**: The distinct placements of $sg()$:
- The second term, with $sg()$, focuses on updating the codebook, ensuring it reflects the encodings accurately.
- The third term, conversely, guides the encoder learning so that its outputs align closely with the existing codebook entries.
4. **Intuitive Takeaway**: Think of it as a dance between two partners, the encoder and the codebook. The second term ensures the codebook learns to match the encoder's steps (representations). Simultaneously, the third term ensures the encoder does not stray too far from what the codebook knows, offering a consistent dance.
By carefully balancing these terms using scale coefficients, we can achieve a harmonious alignment between the encoder and the codebook, ensuring optimal representations.
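For reference, these terms come from the standard VQ-VAE objective, which can be written as follows (with $\mathrm{sg}[\cdot]$ the stop-gradient operator, $z_e(x)$ the encoder output, $e$ the nearest codebook entry, and $\beta$ a scale coefficient; this follows the original VQ-VAE paper and may differ slightly in notation from Eq. (3) of the manuscript):

```latex
\mathcal{L} \;=\; \underbrace{-\log p\bigl(x \mid z_q(x)\bigr)}_{\text{reconstruction}}
\;+\; \underbrace{\bigl\lVert \mathrm{sg}[z_e(x)] - e \bigr\rVert_2^2}_{\text{codebook (2nd term)}}
\;+\; \beta\, \underbrace{\bigl\lVert z_e(x) - \mathrm{sg}[e] \bigr\rVert_2^2}_{\text{commitment (3rd term)}}
```

The second term moves only the codebook entry $e$ toward the frozen encoder output; the third moves only the encoder output toward the frozen codebook entry.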
# Response to Query on $\tau$ Definition:
Thank you for pointing out the inconsistency regarding $\tau$. We apologize for the confusion.
1. **Definition of $\tau$**: As highlighted, $\tau$ is indeed defined in lines 122-123 of the manuscript.
2. **On Subscript Clarification**: We understand the confusion and regret the inconsistency. To clarify, the notation "$\tau_{t-1}$" is a typographical error. The correct notation is "$\tau_{0:t-1}$", which denotes the entire trajectory from the start up to "t-1".
We will rectify the inconsistency throughout the manuscript to ensure "$\tau_{0:t-1}$" is used consistently and appropriately. Your feedback is instrumental in enhancing the clarity of our work.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for incorporating some of my suggestions and also addressing my questions. I am still not clear how the discrete state representation is a sufficient statistic or an information state. The argument that this representation empirically yields better results does not necessarily entail good performance in other environments. I thank the authors on the clarification for the various terms in Eq(3). This helped me in understanding the equation.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you once again for your invaluable feedback. To address the question of sufficiency of the discrete state for control, we rely on a two-pronged approach:
1. **Reconstruction Capability**: The loss function in Equation (3) contains a reconstruction term as its first component. This term is designed to maximize the information encapsulated in the discrete state so that it can adequately reconstruct the original state. In essence, if the discrete state can accurately reconstruct the original state, it carries enough information to be useful for control tasks.
2. **Hyperparameter Tuning**: We acknowledge that discrete representations can lose some information compared to the original states. To counteract this and ensure the discrete states are sufficient for control, we offer flexibility in the model through hyperparameters. Specifically, users can adjust the number of discrete codes and the dimension of the codebook. This fine-tuning enables the model to capture the necessary amount of information to maintain control quality.
Through these mechanisms, we aim to ensure that the discrete states generated are indeed sufficient for effective control. | Rebuttal 1:
Rebuttal: **Response to Reviewers:**
Dear Reviewers,
Thank you for your comprehensive feedback on our manuscript. We have addressed each comment individually in the subsequent sections. Additionally, an attached PDF provides further clarifications with relevant tables.
Your insights are crucial for refining our work, and we trust our responses and modifications align with your suggestions.
Looking forward to your continued feedback.
Best regards,
Authors
Pdf: /pdf/313eaffe2246ff6c6d00d2ba245caf5279c658f8.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Can You Rely on Your Model Evaluation? Improving Model Evaluation with Synthetic Test Data | Accept (poster) | Summary: This paper contributes a new model evaluation framework called 3S-Testing, which uses (conditional) deep generative models to create synthetic test sets. To provide uncertainty estimates, 3S uses a deep generative ensemble method. It is empirically confirmed to better estimate performance on small subgroups (compared with using real data) and on distributionally shifted data.
Strengths: The motivation is clear by targeting the subdominant class. Using generative models (and generative ensembles) to evaluate models is an encouraging attempt, especially for uncertainty estimation. The experiment section is clearly organized and easily understood.
Weaknesses: $\bullet$ The major concern is that, in order to evaluate a model, more uncertified models are introduced, even though they are used for uncertainty estimation. The author(s) gave reasoning from line 155 to 161. The first is reasonable, while the second is not convincing. For example, in figure 2, the samples of the green star class are sparse in the test set by nature. Generating samples of the green star class as in (b) is a “bias” introduced by the generative model.
$\bullet$ Section 4.2 is hard to read. While the intention was to keep the discussion of shifts general, more and more notation and simplifying assumptions are continuously added, sometimes without explanation (see the questions below).
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: $\bullet$ How to estimate the biases of the generative models on the subgroup, besides providing uncertainty estimation? One possible case is mentioned in the weaknesses, where the generative model generates new samples based on some of its own biases.
$\bullet$ Lines 195-198 are confusing to me. After G is trained, how exactly is it used? Steps (2) to (4) are not clear about this. I understand it is talked about later, but it is important to make it clear in the summary paragraph.
$\bullet$ Lines 207-208, Where does that assumption come from?
$\bullet$ Can the author(s) please briefly summarize the discussion (with minimum information/notation/assumptions needed) for lines 204-218? More specifically, does the final recipe proposed in lines 217 to 218 rely on the assumption “simplest such shift” (line 212)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: Fully discussed with explanation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
Thank you for your time and constructive feedback. We wish to clarify each point in turn.
## 1. Reasons for using generative models.
The paper only uses a deep generative ensemble (DGE) [24] to model the uncertainty over the generative model parameters and estimate generative errors. We acknowledge that estimating errors is hard, and this ensemble approach does not come with guarantees—to do so we would instead need to model the intractable posterior over the generative model parameters. Nonetheless, though the 3S estimate may or may not be unbiased, we believe our consistently better evaluation results give strong evidence it can provide a significantly better bias-variance trade-off than real data evaluation, and in turn we hope to have shown the value of synthetic data for evaluation purposes.
**Figure 2b**. Putting this in the context of Figure 2b: we agree that just because the manifold seems plausible, this does not mean it is the true underlying distribution. A different member of the DGE would have converged to a slightly different manifold, caused by a lack of evidence for preferring one manifold over another. By using DGE, we aim to implicitly model our uncertainty over the “correct” manifold.
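To make the DGE mechanism concrete, here is a minimal illustrative sketch (not the paper's implementation; the function name and numbers are invented) of how per-member metric estimates could be aggregated into a mean estimate and an uncertainty interval:

```python
import statistics

def dge_estimate(member_scores):
    """Aggregate per-member metric estimates from a deep generative
    ensemble (DGE) into a mean and a rough 2-sigma uncertainty interval.

    Each entry of `member_scores` is the downstream metric (e.g. subgroup
    accuracy) computed on one ensemble member's synthetic test set; the
    spread across members reflects uncertainty over the learnt manifold.
    """
    mean = statistics.mean(member_scores)
    std = statistics.stdev(member_scores)
    return mean, (mean - 2 * std, mean + 2 * std)

# Five hypothetical ensemble members that disagree slightly on subgroup accuracy:
mean, (lo, hi) = dge_estimate([0.81, 0.79, 0.84, 0.80, 0.82])
```

A wide interval (large disagreement between members) would then signal that the synthetic-data estimate itself is uncertain.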
**Generative models for implicit interpolation/weighting of real data**. Lastly, the paper’s second argument (L158) for using generative models for evaluation (for which you raise concerns) extends further than learning manifolds. For example, assume you want to estimate the expected performance of model $f$ for a specific person with features $x_c^*$ (a subset of all features). This is hard with real data alone: for continuous variables $X_c$, the probability that there exist points in $D_{test,f}$ with exactly $x_c^*$ is zero, hence $A(f; D_{test,f}, (x:x_c=x_c^*))$ is undefined. There may be many points in $D_{test,f}$ with $x_c$ close to $x^*_c$, but which metric do we use to define “close”, and given this metric, how do we weigh close points to compute $A$? This is non-trivial, because the target distribution may be very sensitive to some feature changes, while being independent of others (hence a Euclidean metric may not be good). Here too, we may expect that a generative model can actually do better. The generative model can “interpolate” the distribution over the whole space, and hence allows generating points with $x_c = x^*_c$. Of course, this may not be perfect, but again the DGE (where each member interpolates differently) gives insight into the uncertainty.
## 2. Section 4.2 unclarities
### L195-198
**To ensure readers are not confused, we will include the simplified recipe first and move the general shift to the end of the section.**
**General shift**. We can envision many types of shifts that we do not cover in the experiments, but which are also supported by the recipe on line 195 (e.g. concept drifts, see footnote 1 and [Varshney, 2021]). Regardless, we agree that the paper would be clearer if we are not too general when starting Section 4.2 and instead focus on our main setting of interest—i.e. where we make the assumptions of paragraph “Defining shifts” (specifically L204) and where the trained $G$ approximates $p(X_\bar{c}|X_c)$.
**How is $G$ used in practice**. In this case, the 3S evaluation recipe simplifies to: (1) train $G$ to approximate $p(X_{\bar{c}}|X_c)$; (2) choose a shifted distribution $p(\tilde{X}_c)$---e.g. a marginal mean shift of the original $p(X_c)$ (Section 5.2.1), or drawing $x_c$ samples from a secondary dataset (Section 5.2.2); (3) draw samples $x_c$, and subsequently *use $G$ to generate the rest of the variables $X_{\bar{c}}$ conditional on these drawn samples*; and (4) evaluate downstream models.
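Purely as an illustration, the four-step recipe could be sketched as follows. The `ConditionalGenerator` below is a toy stand-in for a real conditional generative model (e.g. a conditional tabular GAN); all names and the residual-resampling logic are our own assumptions, not the paper's actual code:

```python
import random

class ConditionalGenerator:
    """Toy stand-in for a conditional generative model of p(X_rest | X_c):
    it memorises residuals X_rest - X_c and resamples them."""

    def fit(self, xc, xrest):
        self.residuals = [r - c for c, r in zip(xc, xrest)]
        return self

    def sample(self, xc):
        return [c + random.choice(self.residuals) for c in xc]

def evaluate_under_shift(model, xc_test, xrest_test, shift, n=1000):
    # (1) train G on the real test set to approximate p(X_rest | X_c)
    g = ConditionalGenerator().fit(xc_test, xrest_test)
    # (2) choose a shifted distribution for X_c, here a marginal mean shift
    xc_shifted = [random.gauss(shift, 1.0) for _ in range(n)]
    # (3) generate the remaining variables conditional on the shifted X_c
    xrest_syn = g.sample(xc_shifted)
    # (4) evaluate the downstream model on the synthetic shifted test set
    scores = [model(c, r) for c, r in zip(xc_shifted, xrest_syn)]
    return sum(scores) / n

random.seed(0)
# The toy "model" returns 1.0 when its prediction rule holds, 0.0 otherwise.
score = evaluate_under_shift(lambda c, r: float(r >= c),
                             xc_test=[0.0, 1.0, 2.0],
                             xrest_test=[0.5, 1.5, 2.5],
                             shift=3.0)
```

The point of the sketch is only the control flow: $G$ is fit once on the test data, and the shifted distribution enters solely through the samples fed to `g.sample`.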
### L207-208
We use this assumption because it captures two of the most studied assumptions in distributional shift literature; $p(X|Y)$ is fixed and $p(Y)$ changes, or $p(X)$ changes and $p(Y|X)$ is fixed (see lines 201-203, and [Varshney, 2021; Section 9.2.1]). We have rewritten footnote 2 to make this link more explicit.
### L204-218
In summary, we focus on shifts in which some variables’ distribution changes, but the other variables’ distribution conditional on these variables does not. From L211 we discuss the simplest such shift (where $X_c$ is a single variable), because it leads to easily-graspable insight for users. The final recipe (L217) does not depend on this simplest shift. Let us elaborate.
In paragraph L211 we focus on simple one-dimensional shifts (in a single variable $X_c=X_i$), because it is easier to define, visualise, and understand the effects of these shifts hence get meaningful insight into the model. For example, in 5.2.1 we shift the mean of $X_i$, use $G$ to generate data conditional on the shifted $X_i$, and study the effect on downstream model scores; if we had done this with two shifted variables instead, this would require generating and testing over a 2D set of shifted distributions—possible, but not as easy to understand.
Choosing a shift in a multi-dimensional $X_c$’s distribution is not always hard however. For example, as shown in Section 5.2.2, if we observe some of the same features in another dataset, we can simply draw $X_c$ from this dataset and avoid having to define the shifted distribution explicitly. To do so, we assume the variables we observe ($X_c$) accurately describe the shift between our original domain and the new domain, such that the generated data will indeed resemble the target domain.
In any case, the recipe in L217 is independent of the shift distribution $p(\tilde{X}_c)$. A reasonable constraint is that the shifted distribution “falls within” the old distribution (i.e. $p(X_c)$ dominates $p(\tilde{X}_c)$). E.g., if $G$ generates medical data based on age and it has only ever seen patients up to 100 years old, it will yield unexpected behaviour when generating 110-year-old patients.
## References
Varshney KR. Trustworthy machine learning. 2021. Chappaqua, NY. p. 118.
---
Rebuttal Comment 1.1:
Title: Thank you for your explanations!
Comment: Thank you for your reply. The explanations are much clearer than what was written in the paper.
According to the explanation, G is not learning p(X) as stated in line 196, but a conditional probability. It aims to generate more (maybe corrupted) samples depending on the shift, doesn't it?
Please consider using this simplified and clearer explanation when editing the draft. I have raised the score. Thank you for your explanation and good luck.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
We are glad our response clarified matters, and would like to thank you for raising your score.
Indeed, for the shifts and subgroups considered in the paper, it is more efficient to use a conditional generative model—this allows us to directly generate more samples depending on the shift or subgroup. We could use an unconditional generative model, but this would require generating a lot more data and then using post-hoc sampling to satisfy the subgroup/shift definition.
We agree that the simplified explanation (from our previous response) would be clearer in the context of the rest of the paper and will use it to start Section 4.2.
We appreciate your time taken to improve the paper! | Summary: The authors propose to use synthetic data to evaluate models, especially under distribution shifts or in areas of the input space with low coverage. The authors use CTGAN to empirically validate their idea, and apply it to tabular data.
The paper is a resubmission from ICLR 2023 ( https://openreview.net/forum?id=J7CTp-jNyJ ).
Strengths: Originality: While the general idea of using synthetic data for evaluation does not appear new, to the best of my knowledge this is the first thorough evaluation of the idea for tabular data. (I went out of my way to find published studies on this, and the only works I found were in relatively low-quality journals, or very different in scope).
Quality: Experiments were done on 6 datasets of relatively small scale. This gives a first indication that the approach might work, but I mention two possible extensions under "Weaknesses".
Clarity: the paper clarity was fine.
Significance: I think this is an important topic that deserves more study, the results of which will be of interest primarily to practitioners, but might also encourage further research on the topic.
Weaknesses: * The paper mostly focuses on 6 small datasets. While the approach itself is probably interesting in cases where data is scarce, it would be interesting to see this applied to harder datasets that have more than just a couple of dozen features, or where there are millions of samples involved. I personally remain unconvinced the method scales to larger data, and would appreciate if the authors could report results on larger datasets.
* The authors write "an end-user only has access to a single draw of Dtest,f. E.g., we might incorrectly overestimate the true performance of minorities. The use of synthetic data solves this." I'd like to challenge that statement: I think it's very typical in real life (especially given the relatively small data sets that are the focus in the publication) to at least use cross validation to get multiple test sets, potentially even Leave-1-Out CV. This should give a much better estimate of the loss landscape. I'd appreciate if the authors could add such experiments as baselines.
* It would be nice if the authors gave an indication of run times of the various methods they compare to, as this might be useful to practitioners. E.g. they mention in Appendix section C that the GAN was trained with a fairly large hyperparameter search, which I expect is not cheap.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Figure 4: It is unclear what the x-axis actually represents. It would be nice to get an explanation of what the Different Groups mean. The way it is presented, I don't understand what I am looking at.
Line 180: "similar in vain to Deep Ensembles" => similar in vein
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors mention limiations in passing, but have not gone out of their way to find out failure modes of their method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
Thank you for your time and feedback. We would like to address each of your comments in turn.
## 1. Large datasets and other limitations
We agree that the paper would benefit from the inclusion of specific failure cases, including very large datasets for which 3S may not be necessary. We discuss this failure case in the general response under “Limitations and Failure Cases”, as we think this discussion will be of interest to some of the other reviewers too. In a nutshell, we include a new experiment with the already-used Bank dataset (test set 75k+) and see that the benefit of 3S is determined primarily by the size of the subgroup we are trying to measure. If this subgroup is very small, we benefit from 3S; when subgroups get larger, real data evaluation is sufficient. We can use the bootstrap-estimated variance of the real dataset estimate to decide when to use 3S or not.
## 2. Cross-validation — why it's not applicable to our setting & alternatives
We are in the setting where we have a trained black box predictive model $f$, which could have been trained by someone else. For instance, the increasingly common scenario where a trained model is behind an API, yet we still wish to test the model’s capabilities. An example of this was during the pandemic when prognostic models were built in certain countries and were being tested by external parties who did not have access to the original medical training set for data sharing and patient privacy reasons. In this setting, we can only access the model and not the training dataset (see L112-113) — with the test dataset ($D_{test,f}$) our only available data. Of course, this is a challenging setting for 3S as well, since we train the generative model $G$ on $D_{test}$—as we have no access to a (potentially large) $D_{train}$. Since we do not have access to $D_{train}$, we cannot perform cross-validation to obtain multiple train-test splits.
Nonetheless, we agree with your suggestion and there are alternative ways to obtain multiple “test sets”. A common approach which would fit our problem setting and help get multiple test sets is **bootstrapping** — which allows us to obtain multiple datasets. We have included a bootstrapping baseline in the form of Model-Based Metrics (MBM) [20] — which does a computationally efficient version of nonparametric bootstrap. We outperform MBM both on performance vs Oracle in Figure 4 and coverage of the true value with intervals in Figure 5. To make this more clear to the reader we will update Sec 5.1.1., clarifying that MBM does bootstrapping and that our rationale for inclusion as a baseline is to be an alternative statistical mechanism for obtaining multiple test sets.
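For concreteness, here is a minimal sketch of the nonparametric bootstrap idea for obtaining multiple "test sets" from a single one (illustrative only; this is not MBM's actual implementation, and the function name and data are invented):

```python
import random

def bootstrap_accuracy(y_true, y_pred, n_boot=2000, seed=0):
    """Nonparametric bootstrap of an accuracy estimate from a single test
    set: resample indices with replacement to obtain many pseudo test
    sets, yielding a mean estimate and an approximation of Var(A)."""
    rng = random.Random(seed)
    n = len(y_true)
    estimates = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        estimates.append(sum(y_true[i] == y_pred[i] for i in idx) / n)
    mean = sum(estimates) / n_boot
    var = sum((e - mean) ** 2 for e in estimates) / (n_boot - 1)
    return mean, var

# Tiny illustrative test set where the model gets 7 of 10 predictions right:
y_true = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1, 1, 0]
mean, var = bootstrap_accuracy(y_true, y_pred)
```

Each resampled index list plays the role of one alternative test set drawn from the empirical distribution of the original one.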
## 3. Run times.
We agree with the point made about showing run times, especially since we tune a generative model. Please refer to point 4 in the general response where we outline the computational cost. In summary, for typical tabular datasets, the computational requirements are often still very manageable — under 5 min for most datasets and under 40 min for the largest dataset Bank (see Response pdf). Of course, tuning will depend on dataset size, and the numbers we report in the table in the response pdf are with respect to the dataset sizes in the main manuscript.
## 4. Unclear subgroups Figure 4.
We regret that the groups in Figure 4 were unclear. The subgroups correspond to the possible values one particular categorical feature can take. This differs per dataset: for Adult, it is different Race groups; for Drug, Ethnicity groups; for Covid, the Region the patient comes from; for Support, Race groups; for Bank, different Employment statuses. Since the specifics of each group are challenging to put in the figure without harming readability, we will add an Appendix that specifies the different groups for each dataset.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarifications, and for adding an additional dataset. After seeing all the additional reviews, my thoughts on this manuscript are as follows: using generative models to have more synthetic data is certainly an attractive idea. The area feels under-explored and in need of a very thorough analysis: though I found this idea mentioned very often, I have never seen it very rigorously evaluated. To be candid, this work falls a bit short of my standards for a "very rigorous evaluation": it evaluates on fairly few datasets, and has little exploration of errors in the generative process (e.g. it pretty much glosses over how to pick a suitable generative model). So all in all I feel that my original rating of a "weak accept" is still appropriate: the work is okay and has no major flaws. I think with more effort it could even become a hallmark paper on the topic, but in its current state, it falls short of that.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We agree that the area of generative data augmentation is underexplored and too often poorly evaluated. We hope to have shown the benefit of synthetic data for testing small subgroups and shifts across a range of tabular datasets (and subgroup definitions). Many thanks again for your time and effort reviewing the paper. | Summary: This paper proposes to use synthetic test data to improve the estimation of model performance for tabular datasets when insufficient test data is available. Their approach of generating synthetic test data conditioned on subgroups improves performance estimation for underrepresented subgroups and can accurately estimate the model performance under distributional shift towards the underrepresented subgroups.
Strengths: The paper is well-written, and the details are covered by the Appendix.
They have done a good job in showcasing the advantage of their synthetic strategy.
Weaknesses: Their proposed method is limited to tabular data, and data synthesis has been looked at in prior studies, but I think this is a good paper with nice experiments.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Q1. Extreme underrepresentation of the data in some regions, i.e., small subgroups, could affect the training of the generative model as well. At what point does your method fail, i.e., it is simply as good as Dtest strategy? Can you show this for different datasets studied in Figure4?
Q2. How do you justify the 3S+ performing worse than 3S for the bank dataset especially for decision tree and random forest in Appendix Figure 10 and Figure 4?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
Thank you for your time and feedback. We would like to address each of your comments in turn.
## 1. Extreme underrepresentation failure case.
We agree that the paper would benefit from the inclusion of specific failure cases, including very small subgroups. We discuss this failure case in the general response under “Limitations and Failure Cases”, as we think this will be of interest to some of the other reviewers. In particular for small subgroups, we see that a practitioner can use the 3S uncertainty bounds to decide whether to trust an estimate. For extremely small subgroups the uncertainty bounds of 3S are usually larger and may urge a practitioner to gather additional data.
## 2. Bank dataset 3S and 3S+ performance comparison.
Indeed, in Figure 4, 3S+ (pink) has higher MAE vs the Oracle compared to 3S (green) for the small groups for the random forest classifier. In the Appendix, we see that real data ($D_{test}$) has a very large error when estimating the performance of the decision tree—implying only that this particular real test data is ill-suited to probe this particular shallow decision tree. Because 3S+ consists of 3S and real test data, it partly inherits the poor performance from the $D_{test}$ estimate and thus leads to poorer estimates than 3S for this particular instance.
---
Rebuttal Comment 1.1:
Comment: Thanks. I am satisfied with the answers to my questions.
---
Reply to Comment 1.1.1:
Comment: Thank you! And thanks again for your time and suggestions. | Summary: In this paper, the authors propose the utilization of synthetic data for evaluating models and introduce an automated suite of synthetic data generators called 3S. 3S offers two key advantages: it enables reliable and detailed evaluation, and it measures model sensitivity to distributional shifts. The paper explores different scenarios involving 3S, providing valuable insights into the application of synthetic data for model evaluation.
Strengths: 1. The paper exhibits a clear and easily comprehensible writing style. The authors conduct a comprehensive examination of relevant literature, effectively summarizing its advantages.
2. The issue investigated in this paper, namely the utilization of synthetic data for evaluating models, is both intriguing and significant.
3. The authors effectively discuss various use cases of 3S, offering insights into the practical utilization of synthetic data for model evaluation.
Weaknesses: 1. Further discussion on method limitation is needed.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: No questions.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: As mentioned, further discussion on method limitation is needed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
Many thanks for your time and positive feedback.
We agree that the paper would benefit from an extended limitation section and specific failure cases. We discuss these in the general response under “Limitations and Failure Cases”, as we think this discussion will be of interest to some of the other reviewers too.
---
Rebuttal Comment 1.1:
Comment: Great. Good luck!
---
Reply to Comment 1.1.1:
Comment: Thank you! And thanks again for your time and suggestions.
Regards
Paper 12138 Authors | Rebuttal 1:
Rebuttal: Dear reviewers,
Some reviewers suggested a longer limitations discussion, others were interested in specific failure cases. We address these points here. In the camera-ready paper we will extend the discussion to reflect these.
## Limitations
We summarise 3S’s main limitations (where applicable referencing limitations already outlined in the paper).
1. **Subgroups in real test set are large**. When the aim is subgroup evaluation and there is sufficient real data for this subgroup, the real data estimate on $\mathcal{D}_{test,f}$ will be sufficiently accurate and there is no need to use 3S given the computational overhead. See *Failure Case 1* for new experiments and further discussion. Note, even for very large datasets, there can be very sparse regions or small subgroups for which using 3S is still beneficial. Also note that this limitation mostly applies to subgroup evaluation and less to Generating Synthetic Data with Shifts (Sec. 4.2), because performance estimates for possible shifts are less trivial using real data alone (e.g. they require reweighting or resampling of test data) and we show (Sec. 5.2, Table 1 & Fig. 6) that 3S beats real data baselines consistently.
2. **Possible errors in the generative process** (L177, L402 & Fig. 7). Errors in the generative process can affect downstream evaluation. This is especially relevant for groups or regions with few samples, as it is more likely a generative model does not fit the distribution perfectly here. By replacing the generative model by a deep generative ensemble [24], 3S provides insight into its own generative uncertainty. When there is very little data and 3S’s uncertainty bounds are too large, practitioners should consider gathering additional data. See *Failure Case 2* below.
3. **Not enough knowledge or data to generate realistic shifts** (L398, Sec. 5.2.2). The success of modelling data with distributional shifts relies on the validity of the distributional shift’s assumptions. This is true for 3S as much as for supervised works on distributional shifts (e.g. see [Varshney, 2021; Sec. 9.2.1]). In Sec. 5.2.2 we show how a lack of shift knowledge may affect evaluation. We aimed to generate data to test a model on UK patients, using mostly data from US patients. We do not define the shift explicitly—we only assume that we observe a small number (1 to 4) of features $X_c$ (hence the conditional distribution of the rest of the features conditional on these features is the same across countries). In Fig. 6b, we show the assumption does not hold when we only see one feature—this lack of shift knowledge is a failure case. When we add more features, the assumption is more likely to approximately hold and 3S converges to a better estimate. We reiterate that invalid shift assumptions are a limitation of any distributional shift method, e.g. the rejection sampling (RS) baseline using only real data is worse than 3S overall.
4. **Computational cost** (L404). The computational cost & complexity of using generative models is always higher than using real test data directly. For typical tabular datasets, the computational requirements are often very manageable: under 5 min for most datasets and under 40 min for the largest dataset Bank (see Response pdf). The reported times correspond to the dataset sizes in the paper. We would like to emphasise that the cost at deployment time for a poorly evaluated model can be unexpectedly high, which warrants 3S’s higher evaluation cost.
Additionally, pre-implemented generative libraries (e.g. Patki 2016, Qian 2023) can accelerate the generative process, automating generator training with minimal user input.
## Failure cases
For limitations 1 & 2 mentioned above, we include two new experiments highlighting failure cases on two extreme settings:
### F1. Subgroups in real test set are large
With sufficiently large real data, 3S provides no improvement despite higher complexity. We can determine sufficient size by estimating the variance Var$(A)$ of the performance metric $A(f;D,S)$ w.r.t. _the random variable_ denoting the test data $D$. With only access to one real dataset $D_{test}$ however, we can approximate Var$(A)$ via bootstrapping (Fig. 5). If Var$(A)$ falls below a small threshold $\alpha$ (e.g. 0.05), practitioners may for example decide to trust their real data estimate and not use 3S, as further evaluation improvements are unlikely. In our Bank dataset experiment (500k examples), we vary age & credit risk thresholds, retaining samples above each cut-off to shrink the test set (**see Fig. 2 - response pdf**). Note that for large datasets but very small subgroups, the $D_{test,f}$ estimate still has a high variance (reflected in the large bootstrapped Var$(A)$), hence this should urge a practitioner to use 3S (see Fig. 2 response pdf).
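The decision rule described above could be sketched as follows (illustrative only; the function name, threshold handling, and example data are our own, not the paper's implementation):

```python
import random

def subgroup_needs_3s(correct, alpha=0.05, n_boot=1000, seed=0):
    """Bootstrap Var(A) of a subgroup's real-data accuracy estimate.
    If the variance is below `alpha`, the real-data estimate is stable
    enough to trust; otherwise 3S's overhead may be worthwhile."""
    rng = random.Random(seed)
    n = len(correct)
    accs = []
    for _ in range(n_boot):
        accs.append(sum(correct[rng.randrange(n)] for _ in range(n)) / n)
    mean = sum(accs) / n_boot
    var = sum((a - mean) ** 2 for a in accs) / (n_boot - 1)
    return var >= alpha

# A 200-sample subgroup yields a stable estimate; a 2-sample one does not.
large_group = [1, 0] * 100   # accuracy 0.5, Var(A) roughly 0.25/200
tiny_group = [1, 0]          # accuracy 0.5, Var(A) roughly 0.25/2
use_3s_large = subgroup_needs_3s(large_group)
use_3s_tiny = subgroup_needs_3s(tiny_group)
```

Here `correct` is a per-sample 0/1 correctness indicator for the subgroup, so the bootstrapped variance shrinks roughly as 1/n with subgroup size.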
### F2. Large uncertainty for very small test sets
At the other extreme, when there are _too few_ samples the uncertainty of 3S (quantified through a DGE [24]) can become too large. We include a new experiment (**Fig. 3, response pdf**) in which we reduce subgroups to fewer than 10 samples in the test data. We train $G$ in 3S on the overall test set (which includes the small subgroup) with $n_{samples}$. Despite good performance versus an oracle, the uncertainty intervals from 3S's ensemble span 0.1-0.2. These wide intervals make the 3S estimates unreliable and less useful, and would urge a practitioner to consider gathering additional data. The key takeaway: With extremely sparse subgroups, the large uncertainties signal that more data should be gathered before relying on 3S's uncertain estimates.
## References
Patki, N., Wedge, R., & Veeramachaneni, K. (2016, October). The Synthetic Data Vault. IEEE.
Varshney KR. Trustworthy machine learning. 2021. p. 118.
Qian, Z., Cebere, B. C., & van der Schaar, M. (2023). Synthcity: facilitating innovative use cases of synthetic data in different data modalities. arXiv preprint arXiv:2301.07573.
Pdf: /pdf/95f013bb2a716a5275fb853bb3fb3d8ba1b52704.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a model evaluation framework, generating the synthetic test to mitigate the challenges of model evaluation with limited real test sets, such as unbalanced subgroups and distribution shifts.
Strengths: The idea of using synthetic data to improve the testing and evaluation of machine learning models is impressive.
This paper analyzes clearly and reasonably the failure of real test data and its corresponding challenge for reliable model evaluation.
Weaknesses: **1. Counterintuition.** Although the empirical benefits of synthetic data are observed in the experimental parts, the inequality on Line 155 is not established theoretically. More precise statements are needed here.
**2. Overfitting of deep generative models.** With limited test data in the small subgroup, deep generative models tend to overfit. In this case, the learned manifold could have a huge gap from the real manifold.
It could be better to show more visualizations in Figure 2. In detail, if we change the test samples in the green subgroup, what will happen when we compare synthetic manifolds and the real manifold?
**3. Failure case.** As a model evaluation framework with synthetic data, which makes a lot of sense in the real world, this paper lacks failure cases to show the limitations of the work.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please check the weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please check the weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
Thank you for your time and feedback. We would like to address each of your comments in turn.
## 1. Counterintuitive [line 155]
We do not want to give the impression that the equation in line 155 holds in general—though we hope the experiments are convincingly consistent and argue it very often will. To mitigate us seeming to make any theoretical claims here, we have changed line 155 to "It may seem counterintuitive that $|A^*-A(f;D_{syn}, S)|$ **would ever be lower than** $|A^*-A(f;D_{test,f}, S)|$, ...” and we will link to an extended limitation section with failure cases—see point 3 below.
## 2. Overfitting of deep generative models and learnt synthetic data manifold
We agree that overfitting in generative models is a serious problem, possibly leading to significant errors (e.g. in the learnt manifold). In practice, however, we have found that the real manifold is very well approximated. In Figure 1 of the response PDF, we follow your suggestion and include a t-SNE of the real test data, an oracle set, and synthetic data. We observe that the synthetic data displays the same microstructures/manifold as the oracle set.
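A comparison like the one described here can be sketched with scikit-learn's t-SNE. Note this is a minimal illustration, not the authors' actual pipeline: the three feature matrices below are random placeholders standing in for the real test set, an oracle set, and generator samples.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Placeholder feature matrices (one row per example); in the paper's setting
# these would be embeddings of real, oracle, and synthetic test points.
real = rng.normal(0.0, 1.0, size=(200, 16))
oracle = rng.normal(0.0, 1.0, size=(200, 16))
synthetic = rng.normal(0.1, 1.0, size=(200, 16))

# Embed all three sets jointly so their 2-D coordinates are comparable;
# separate fits would produce incomparable embeddings.
stacked = np.vstack([real, oracle, synthetic])
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(stacked)

real_2d, oracle_2d, synthetic_2d = np.split(embedding, 3)
print(real_2d.shape, oracle_2d.shape, synthetic_2d.shape)
```

The three 2-D point clouds can then be overlaid in a scatter plot to compare, visually, whether the synthetic sample traces the same manifold as the oracle set.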
*DGE*. To mitigate (and provide insight into) generative error, 3S uses a deep generative ensemble (DGE) [24] to model the uncertainty over the generative model parameters. Putting this in the context of Figure 2b: we agree that the learnt manifold may vary in practice, because there is a lack of evidence for preferring one manifold over another—e.g. a different member of the DGE would have converged to a slightly different manifold. By using DGE, we aim to implicitly model our uncertainty over the “correct” manifold.
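The deep generative ensemble idea can be sketched in miniature: train several generators on the same data, score the model on each member's synthetic sample, and treat the spread across members as a rough uncertainty estimate. Everything below is a toy stand-in; `fit_generator` is a bootstrap-fitted Gaussian, not a deep generative model, and the `metric` is a hypothetical accuracy-like score.

```python
import random
import statistics

def fit_generator(data, seed):
    """Toy 'generative model': a Gaussian fitted to a bootstrap resample.
    Each seed yields one ensemble member, mimicking independently trained
    deep generative models."""
    rng = random.Random(seed)
    boot = [rng.choice(data) for _ in data]
    mu, sigma = statistics.mean(boot), statistics.stdev(boot)
    return lambda n: [rng.gauss(mu, sigma) for _ in range(n)]

def ensemble_estimate(data, evaluate, n_members=5, n_synth=500):
    """Score a model on each member's synthetic sample; report the mean
    estimate and its spread as a rough uncertainty measure."""
    scores = [evaluate(fit_generator(data, seed)(n_synth)) for seed in range(n_members)]
    return statistics.mean(scores), statistics.stdev(scores)

rng0 = random.Random(0)
real = [rng0.gauss(0.0, 1.0) for _ in range(200)]
# Hypothetical 'model performance' metric: fraction of synthetic points that
# a threshold classifier at 0 labels positive.
metric = lambda synth: sum(x > 0 for x in synth) / len(synth)
mean_score, spread = ensemble_estimate(real, metric)
print(mean_score, spread)
```

Disagreement among members (a large `spread`) signals that the data underdetermine the learnt manifold, which is exactly the uncertainty the DGE is meant to surface.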
## 3. Failure cases
We agree that the paper would benefit from an extended limitation section and specific failure cases. We discuss these in the general response under “Limitations and Failure Cases”, as we think this discussion will be of interest to some of the other reviewers too.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for this detailed response. My concerns are well-addressed. I'd like to improve my score to "weak accept".
---
Reply to Comment 1.1.1:
Comment: We are happy to hear your concerns have been addressed and will make changes to the revised paper to reflect this discussion. Thank you for reconsidering your score, and thanks again for reviewing our paper. | null | null | null | null | null | null |
Pre-Training Protein Encoder via Siamese Sequence-Structure Diffusion Trajectory Prediction | Accept (spotlight) | Summary: The authors propose to use the diffusion on both the protein sequence and the protein structure for pretraining. Additionally, they also take the correlation between different conformers of the same protein into consideration and maximize the mutual information between their trajectories via mutual denoising. Specifically, different conformers of the same protein are generated by perturbations on the side-chain torsional angles. Experiments on Enzyme Commission classification and four Atom3D tasks demonstrate the benefit of diffusion-based pretraining and mutual denoising.
Strengths: 1. The idea of maximizing the mutual information between diffusion trajectories of different conformers is interesting and inspiring. The authors also show the objective can be transformed into a loss function for mutual denoising between two relevant trajectories.
2. The evaluation is comprehensive. The authors compare with different pretraining strategies and conduct many ablation studies. They also explore the effect of the size of the pretraining data and different diffusion models.
Weaknesses: 1. Random perturbations on the side-chain torsional angles may produce conformations with high energy, which does not reflect the actual "physics underlying the conformational change" as claimed in lines 51-52. This may also weaken the effect of the mutual denoising scheme. As shown in the ablation studies, discarding the mutual information maximization scheme has only a very minor effect on the performance.
2. The organization of the paper can be improved. The main insight lies in how to maximize the mutual information between representations of Siamese Trajectories. However, the major illustration of this contribution is put into the Appendix while the authors spend a great deal of time introducing the preliminaries on diffusion.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Do you think the mutual denoising scheme can also work on homologs as you have mentioned the problem in line 50? There might not be a 1-to-1 mapping on the node representations when it comes to homologs since they may have different number of residues.
2. What do you think is the reason that diffusion-based pretraining produces better results than denoising score matching pretraining in your experiments? In Section 4.4, you have mentioned that perturbing distance matrices may produce negative values; however, it is also natural to do denoising score matching pretraining directly on the absolute coordinates [1][2]. Could the denoising score matching baseline be underestimated because it is implemented on the distance matrix instead of the absolute coordinates?
3. Can you theoretically show how much deviation will be introduced by the approximation in line 743 (Appendix C.1)? For example, can you derive the bound of the gap between the lowerbound of mutual information and the loss you derived from the RHS of the approximation?
4. What is the value of $t$ in the diffusion model in the fine-tuning phase?
[1] Pre-training via Denoising for Molecular Property Prediction
[2] Energy-Motivated Equivariant Pretraining for 3D Molecular Graphs
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Random perturbations on the side-chain torsional angles may not conform to the actual distribution of the conformational change.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your insightful comments and valuable suggestions! We respond to your concerns below:
>**Q1: The random side-chain perturbation does not reflect the actual "physics underlying the conformational change"**
Please see the global response for details.
>**Q2: Discarding the mutual information maximization scheme only has very minor effect on the performance**
We respectfully disagree with the reviewer on this point. It should be noted that the SiamDiff method without mutual information maximization scheme is exactly the DiffPreT method. As shown in Tables 1 & 2, the improvement of SiamDiff over DiffPreT is consistent and significant, especially on residue-level tasks.
>**Q3: The organization of the paper can be improved. The major illustration of this contribution is put into the Appendix while the authors spend a great deal of time introducing the preliminaries on diffusion.**
Thanks for the suggestion! In the revised version, we will move a portion of preliminaries of diffusion models to the appendix, thus leaving more space to discuss and evaluate the effect of different diffusion design choices and conduct more downstream evaluation. We believe this can be done in the camera ready version with an additional content page limit.
>**Q4: Do you think the mutual denoising scheme can also work on homologs as you have mentioned the problem in line 50?**
This is an interesting question! It is indeed possible to use structural homologs instead of randomly generated conformers for mutual denoising. Though there might not be one-to-one residue mapping, we can still use the alignment tools between protein structures [a,b] to get residue/node correspondence for mutual denoising. Nevertheless, extra efforts are required to study whether these realistic conformers are better for pre-training, as discussed in our response to Q1. We leave this question to future work.
[a] Zhang, Yang, and Jeffrey Skolnick. "TM-align: a protein structure alignment algorithm based on the TM-score." Nucleic acids research 33.7 (2005): 2302-2309.
[b] Holm, Liisa. "Using Dali for protein structure comparison." Structural Bioinformatics: Methods and Protocols (2020): 29-42.
>**Q5: What do you think is the reason that diffusion-based pretraining produces better results than denoising score matching pretraining in your experiments? Could the denoising score matching baseline be underestimated because it is implemented on the distance matrix instead of the absolute coordinates?**
Regarding denoising score matching, we agree that it is more appropriate and intuitive to add noises directly to the absolute coordinates instead of distance matrices. In our ablation study, the baseline (SiamDiff w/o sequence diffusion) can be considered as an augmented version of denoising score matching. The superiority of SiamDiff over denoising score matching can be attributed to two main factors: sequence-structure joint diffusion and multi-level noise scheduling, both of which have been extensively discussed in our ablation study.
Furthermore, as mentioned in Sec. 3.4, another drawback of previous denoising-based molecular pre-training methods is that they typically treat the noise level as a hyperparameter to be tuned [a]. This presents a significant challenge in selecting the optimal hyperparameter for pre-training, as it becomes difficult to capture both coarse- and fine-grained features effectively.
[a] Zaidi, Sheheryar, et al. "Pre-training via denoising for molecular property prediction." ICLR, 2023.
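The coordinate-space denoising objective discussed here (adding noise directly to absolute positions, as in [a], rather than to distance matrices) can be sketched in a few lines. This is an illustrative sketch only; the 50-atom structure and the noise scale are hypothetical.

```python
import numpy as np

def coordinate_denoising_pair(coords, sigma=0.1, rng=None):
    """Sketch of denoising pre-training on absolute coordinates: perturb each
    atom position with isotropic Gaussian noise. A network would be trained to
    predict the added noise (the score, up to scale) from the noisy structure."""
    rng = rng if rng is not None else np.random.default_rng()
    noise = rng.normal(0.0, sigma, size=coords.shape)
    return coords + noise, noise

# Hypothetical 50-atom structure (Cartesian positions).
coords = np.random.default_rng(0).normal(0.0, 5.0, size=(50, 3))
noisy, target = coordinate_denoising_pair(coords, sigma=0.1,
                                          rng=np.random.default_rng(1))
print(noisy.shape, target.shape)
```

Unlike distance-matrix perturbation, this formulation cannot produce invalid (e.g. negative-distance) geometries, which is why it is the more natural baseline the reviewer asks about.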
>**Q6: Can you theoretically show how much deviation will be introduced by the approximation in line 743 (Appendix C.1)? For example, can you derive the bound of the gap between the lower bound of mutual information and the loss you derived from the RHS of the approximation?**
Thanks for your careful reading and insightful question! The approximation made in line 743 is based on replacing representations with diffusion trajectories. The latter is easier to sample and provides more informative targets for recovery. Therefore, this decision is more of a practical choice. In theory, it would be possible to derive a lower bound for the error introduced in this approximation by introducing some assumptions about the information loss between representations and trajectories. However, since the primary focus of the paper lies in the application of the pre-training algorithm, the presented theorems serve as a justification for the reasonableness of our method and only consider rough approximations. Characterizing theoretically the deviation introduced by such approximations is a very interesting direction for future work.
>**Q7: What is the value of $t$ in the diffusion model in the fine-tuning phase?**
We want to clarify that we only introduce noises in the pre-training stage and will use the original protein in the fine-tuning stage, which is common in denoising pre-training methods [a,b]. The aim of pre-training is to make the learned representations capable of capturing structural and sequential details. We don’t need to introduce additional noise for downstream tasks. We will clarify this point in the revised version.
[a] Zaidi, Sheheryar, et al. "Pre-training via denoising for molecular property prediction." ICLR, 2023.
[b] Liu, Shengchao, Hongyu Guo, and Jian Tang. "Molecular geometry pretraining with se (3)-invariant denoising distance matching." ICLR, 2023.
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: Thanks for your detailed response, which largely alleviates my concerns. Thus I raised my rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for your suggestions and response! We'll follow your suggestions to continue to work on the revision of the paper. | Summary: The authors propose a novel pre-training method for proteins by jointly modeling sequences and structures with a diffusion model (DiffPreT). The encoder, which is the noise prediction network, learns representations for the sequence and the structure, respectively. This representation can then be used for downstream tasks. Additionally, the authors suggest an extension to their method to capture the correlation between conformer structures, based on the maximization of mutual information between diffusion trajectories for different simulated conformers, obtained by perturbing side chain rotations.
Strengths: The authors bring forward some very interesting ideas and the results are promising, possibly signifying a step forwards within the field of protein representations. Moreover, it is clear that an extensive amount of work has gone into running experiments, including ablation studies and additional results in the appendix.
Weaknesses: The paper is confusing and unclear in some places, both in terms of text and in terms of mathematical notation. There are some essential parts of the method that are not properly explained and/or motivated, and not all results seem to indicate a significant improvement. Finally, I think the results could be presented in a more diverse way, rather than only showing big tables.
For comments and suggestions on how to improve the manuscript, I refer to the "Questions" part.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Main questions / comments:
1. In the DiffPreT setting (and to some extent also in the SiamDiff setting), it is unclear to me what noise level is used to get the representation for downstream tasks. It is mentioned that multiple levels of noise are used, but it remains vague whether this means concatenating representations at multiple noise levels, all noise levels, or picking a noise level at random when getting the representation for the downstream task.
2. Maybe even more generally, could you give some intuition on why representations on noisy structures could serve better for downstream tasks?
3. Could you comment on the size of the latent representations that you get? I might have overlooked it, but I missed what the size of $d$ is. This seems like important information, especially when the SiamDiff setting is used where all these representations seem to be concatenated.
4. For the SiamDiff setting, it is not intuitive to me what kind of distribution is really being modelled. Randomly rotating the side chains without perturbing the backbone, and simply throwing away structures with clashes, seems a bit messy. Is the resulting distribution something "biologically relevant"?
5. When considering the standard deviation of some of the results, the improvements are not always statistically significant. Perhaps it would be good to acknowledge this.
6. The structuring of the paper is a bit off in some places, e.g. 3.4 is showing results even before all methods are explained, and the discussion in 4.4 is more a "related works" section, which has mostly been discussed in the introduction already, and which somewhat disrupts the flow of the paper.
7. From the main text, it took me a long time to discover that the encoder $\phi$ is the noise predicting network, and even longer to see that it is GearNet-Edge (and GVP in the appendix). This could be more clear earlier on in the manuscript.
8. In the appendix it is stated that GearNet-Edge is a structural encoder. However, from the main paper it seems like you get a representation for both the sequence and the structure. How does this work? Are these separate networks?
9. All results are presented as big tables, it would help to include some intuitive graphical results. Perhaps visualize the protein structure / sequence reconstructions of the diffusion model, or somehow visualize the representations (for example see if clusters appear when PCA is done on the representations or something similar).
- Other questions / comments:
10. The term "encoder" can be confusing, as the forward process of a diffusion model can also be seen as a series of unparameterized encoders. It would help if this distinction is clear from the beginning.
11. Perhaps a missing reference for protein structure-sequence co-design: Lisanza et al. [2023]
12. The task explanation in the main paper is very minimal. I appreciate that there is a more extensive description in the appendix, but it could still be much more elaborate.
13. Sometimes abbreviations are introduced before their meaning (e.g. EC, ESM).
14. The "Mean Rank" seems like a strange metric to me that does not add much to all other results.
15. The results using GVP are interesting and would maybe be worth including in the main paper, including a discussion on why some scores are much higher than for GearNet-Edge (except for PSR).
16. Some discussion on computational cost is missing, either in the main paper or in the appendix.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors discuss limitations in the appendix. It would be nice if some of this discussion is transferred to the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your careful reading! While some questions may be due to misunderstanding, we find that your suggestions are very helpful for us to improve the quality and clarity of our paper! We respond to your concerns below:
>**Q1: What noise level is used to get the representation for downstream tasks? Why noisy protein representations can help?**
We clarify that we only introduce noises during pre-training and will use the original protein during fine-tuning, as in denoising pre-training [a,b]. The aim of pre-training is to make the learned representations capture structural and sequential details. We don’t need to introduce additional noise for downstream tasks. We will clarify this point in the revised version.
[a] Zaidi et al. "Pre-training via denoising for molecular property prediction." ICLR, 2023.
[b] Liu et al. "Molecular geometry pretraining with se (3)-invariant denoising distance matching." ICLR, 2023.
>**Q2: Please comment on the size of the latent representation that you use.**
As stated in Appendix D, our approach employs 128 hidden dimensions for atom-level protein structure tasks and 512 for residue-level tasks. We use fewer hidden dimensions in the atom-level case to keep the computational cost moderate on the larger-scale atom-level graphs.
>**Q3: Are the performance improvements on downstream tasks statistically significant?**
Thanks for the question. To validate whether the improvements are statistically significant, we conduct a one-tailed t-test between our methods and the second best pre-training baselines on tasks that our method is the best at. We repeat the experiment with five different seeds and show the results in Table A.
Table A: Statistical significance test results.
|#Task|#Method|p-value|t-statistics|
|:----:|:----:|:----:|:----:|
|PIP (atom)|SiamDiff v.s. Residue Type Prediction|0.001|6.78|
|RES (atom)|SiamDiff v.s. Residue Type Prediction|$5.0 \times 10^{-5}$|15.54|
|PSR mean $\rho$ (atom)|SiamDiff v.s. Distance Prediction|0.003|5.11|
|MSP (atom)|SiamDiff v.s. Distance Prediction|0.04|2.33|
|EC auprc (residue)|SiamDiff v.s. Multiview Contrast|>0.1|-|
|PSR global $\rho$ (residue)|SiamDiff v.s. Residue Type Prediction|0.01|3.68|
|PSR mean $\rho$ (residue)|SiamDiff v.s. Residue Type Prediction|0.01|3.62|
It can be observed that for all tasks except EC, the performance improvements are statistically significant under a p-value less than 0.1 (i.e., with t-statistics surpassing the critical value of the corresponding test). We will include the test results in the camera ready version and acknowledge that the improvements on EC are not statistically significant.
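The one-tailed two-sample t-test reported in Table A can be reproduced with SciPy. The five-seed score arrays below are synthetic placeholders, not the paper's actual results; only the testing procedure matches the description above.

```python
import numpy as np
from scipy import stats

# Placeholder downstream scores over five seeds (illustrative values only).
siamdiff = np.array([0.702, 0.698, 0.705, 0.699, 0.701])
baseline = np.array([0.671, 0.668, 0.674, 0.670, 0.672])

# One-tailed two-sample t-test: the alternative hypothesis is that
# SiamDiff's mean score is higher than the baseline's.
t_stat, p_value = stats.ttest_ind(siamdiff, baseline, alternative='greater')
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```

With five seeds per method the test has 8 degrees of freedom, so the critical value at p = 0.05 is about 1.86; t-statistics above that threshold (as in most rows of Table A) indicate a significant improvement.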
>**Q4: Are the conformers derived by side chain perturbation “biologically relevant”?**
Please see the global response for details.
>**Q5: Sec. 3.4 is showing results even before all methods are explained.**
For Sec. 3.4, we want to emphasize the contribution of the proposed two-stage noising scheme as a part of our method and **the results are necessary to support our claims in Sec. 3.4**. First, we provide intuitive insights into the challenges of denoising sequence and structure with varying noise levels, emphasizing the advantages of two-stage noise scheduling. To validate these points, we examine structure denoising loss and sequence denoising accuracy across diverse noise levels and schedules, effectively showcasing our two-stage noise scheduling approach.
>**Q6: How can you get both sequence and structure representations using GearNet-Edge?**
In the paper, we have mentioned in the first footnote that the structure encoder refers to those that take both sequences and structures as input. Notably, GearNet-Edge employs sequential edges, linking consecutive residues in a protein sequence. In this way, message passing is performed along the protein sequence, and thus GearNet-Edge can extract sequence representations. Meanwhile, spatial and KNN edges are constructed to represent the protein structure, which enables GearNet-Edge to extract structure representations. By combining these different types of edges, GearNet-Edge extracts both sequence and structure representations.
>**Q7: It would help to include some intuitive graphical results.**
Please see the global response for details.
>**Q8: A related work of protein structure-sequence co-design [a] is not referred to.**
This work is closely related to ours in terms of performing sequence-structure joint diffusion. However, our work focuses on learning informative protein representations, while this work focuses on protein generation. We will discuss these connections and differences in the revision.
[a] Lisanza, et al. "Joint generation of protein sequence and structure with RoseTTAFold sequence space diffusion." bioRxiv, 2023.
>**Q9: The metric “Mean Rank” does not add much to all other results.**
We would like to argue that **“Mean Rank” is a commonly used metric for benchmarking methods on a diverse set of tasks**. As shown in Tables 1 and 2, some baseline methods perform well (even the best) on one or two tasks while cannot perform consistently well on all tasks. Since it is hard to measure such performance consistency using individual task-specific metrics, we introduce “Mean Rank'' as a metric for measuring performance consistency across tasks. Based on this metric, we can observe that DiffPreT and SiamDiff are the only two methods that perform consistently well on all tasks.
>**Q10: The discussion on computational cost is missing.**
Regarding computational cost, the protein encoder remains the main bottleneck, involving linear/quadratic cost due to message passing among atoms in the graph. Perturbing/recovering protein sequences and structures carries linear complexity relative to atoms/residues, significantly cheaper than the protein encoder. We will add a paragraph in the method section to discuss the computational cost introduced in pre-training.
---
Rebuttal Comment 1.1:
Comment: First of all, I would like to thank the authors for their detailed rebuttal and the work they put into generating new, interesting results. All my comments were addressed in a concise manner, and all newly reported results and visualizations add to the value of the paper. Given that the authors will improve the clarity of the paper to avoid misunderstandings for future readers, I will happily recommend this paper to get accepted and increase my score (5 $\rightarrow$ 7).
One additional question out of curiosity: I really like the discussion you provided for Figure 4 of the new results, can you also discuss the low-noise outliers (i.e. the small dark blue islands)?
---
Reply to Comment 1.1.1:
Comment: Thanks for your response! We'll follow your suggestions to revise our paper in the final version.
The presence of low-noise outliers first becomes noticeable in Fig. 4(B) and is more evident in Fig. 4(C). These outliers represent proteins with minimal noise. Through pre-training on small noise, our model successfully differentiates between proteins without noise and those with minimal noise. This implies that our model can detect even subtle perturbations in the protein, an observation that might seem intuitive given the number of masks within the protein. Nonetheless, further investigation in higher dimensions is needed to understand why several low-noise outliers appear, which we will continue to work on.
Strengths: Great observation that prior pre-training strategies excel for particular tasks and have shortcomings in others; and while SiamDiff does not yield huge improvements over other strategies, it does consistently outperform them across tasks.
This work takes a thorough and critical look at the benefits and tradeoffs of considering sequence- or structure-only methods, and systematically evaluates different pre-training approaches.
Weaknesses: Lack of evaluation against other downstream predictors and backbone models for downstream prediction.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The sizable gap in sequence denoising accuracy for sequence vs joint diffusion is very interesting, especially considering that in the ablation, the model without structure diffusion is competitive with the others. Can the authors comment on the additional complexity of joint diffusion compared to sequence-only diffusion, and why this accuracy gap is so pronounced? The authors mention model capacity, but clearly capacity is sufficient for downstream tasks.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your insightful comments and great suggestions! We respond to your concerns below:
>**Q1: Lack of evaluation against other downstream predictors and backbone models for downstream prediction.**
**We would like to argue that the focus of the paper is the development and evaluation of new pre-training algorithms, not achieving state-of-the-art performance on a specific task.** Therefore, we consistently use GearNet-Edge as the backbone model and a 2-layer MLP as the prediction head so as to **have a fair comparison among different pre-training methods**. We also try GVP as the backbone model in Appendix H, to show that our methods can be applied to different encoders. Indeed, it is interesting to study the performance of different pre-training methods under other backbone models like ProNet [a] and CDConv [b]. Since the response period is short, we will try our best to finish these studies and add them to a future paper version.
[a] Wang, Limei, et al. "Learning hierarchical protein representations via complete 3d graph networks." ICLR, 2022.
[b] Fan, Hehe, et al. "Continuous-Discrete Convolution for Geometry-Sequence Modeling in Proteins." ICLR, 2022.
>**Q2: In the ablation, the model without structure diffusion is competitive with the others. What’s the additional complexity of joint diffusion against sequence-only diffusion?**
First, we want to argue that structure diffusion is an important component of SiamDiff. According to Table 3, the baseline without structure diffusion does not perform well on structure-informed tasks like PIP, MSP and PSR. We also perform ablation studies on residue-level tasks during rebuttal, the results of which are shown in Table A. Based on these results, we can conclude that structure diffusion brings consistent improvements across all considered tasks, which proves its necessity.
Table A: Ablation study on residue-level tasks.
|#Method|EC AUPR|EC $F_{\max}$|MSP AUROC|PSR Global $\rho$|PSR Mean $\rho$|
|:----:|:----:|:----:|:----:|:----:|:----:|
|**SiamDiff**|**0.878**|**0.857**|**0.700**|**0.856**|**0.521**|
|w/o structure diffusion|0.868|0.850|0.671|0.826|0.502|
In terms of complexity, it should be noted that the bottleneck of computation still lies in the protein encoder. The encoder requires message passing between protein atoms and introduces linear (or quadratic) cost with respect to the number of edges in the protein graph. The process of perturbing and recovering protein structures only requires linear complexity with respect to the number of atoms/residues in the protein, which is similar to sequence diffusion and is cheap compared with the protein encoder. Therefore, it is good to include structure diffusion considering its benefits and little cost.
>**Q3: Why is the gap of sequence denoising accuracy between joint diffusion and sequence-only diffusion so pronounced?**
Thanks for the question! It is an interesting observation that sequence-only diffusion can achieve around 0.8 recovery accuracy, while using joint diffusion decreases the accuracy to 0.4. As explained in the paper, the phenomenon can be attributed to the introduction of large structural noise. The large effect of structural noise implies that sequence recovery may rely on the backbone conformation of the predicted residues (note that we remove the corresponding side chain atoms to avoid information leakage). After perturbing the backbone structures, it becomes much more difficult to infer the residue types. This hypothesis can be validated by the phenomenon observed in residue-level pre-training: when using residue-level SiamDiff, we only keep the alpha carbon atoms instead of all three backbone atoms. Sequence-only diffusion can then achieve only around 0.4 recovery, while joint diffusion decreases the accuracy to 0.3, a much smaller gap. Therefore, we conclude that the large gap arises because atom-level sequence denoising is *too easy* with the correct backbone information, while introducing structural noise makes the pre-training task harder.
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal and thoroughly enjoyed the response! I particularly appreciate the authors' point that "the focus of the paper is the development and evaluation of new pre-training algorithms, not achieving state-of-the-art performance on a specific task". This is a refreshing focus and the paper should not be penalized for this. Additionally, I found the discussion on the impacts of joint diffusion on sequence recovery to be clarifying. I will happily increase my score and recommend acceptance of the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback and support! We will incorporate a discussion regarding the effects of joint diffusion on sequence recovery in the final version. We would be grateful if you could consider increasing your score. | Summary: The paper proposes to use joint protein sequence and structure diffusion as a pretraining task, which they call DiffPreT. In order to account for the fact that proteins can exist as ensembles of conformers, the paper further proposes to generate pairs of conformers, use the diffusion forward process to corrupt them, and then train the reverse process across diffusion trajectories. They call this pretrained model SiamDiff. The paper then shows that DiffPreT and SiamDiff are effective on a panel of protein function prediction tasks.
Strengths: The paper addresses the important problem of protein function prediction using novel and effective methods. Using diffusion as a pretraining task is novel, interesting, and apparently quite effective. The SiamDiff objective is creative and also empirically effective. The experiments are well-done and convincing. The writing is clear, and the paper is easy to follow. I appreciate the uncertainties on metrics and the attempt to benchmark across a wide variety of tasks. Using two-stage noise-scheduling is also an intuitive way to adapt a model framework originally meant for generation to pretraining.
Weaknesses: ### Major
In general, this is a very good paper that clearly describes a promising method for protein pretraining. However, it only evaluates on Atom3D and EC number. The paper would be much more significant if they evaluated GearNet on zero-shot fitness tasks such as those in ProteinGym, protein engineering tasks such as those in FLIP, and more general structure prediction tasks such as those used in CAFA.
In addition, the results would be more general if the paper considered different diffusion schemes. For structure generation, diffusion on [frames](https://arxiv.org/abs/2301.12485) or [angles](https://arxiv.org/abs/2209.15611) seems to outperform diffusion directly on atoms. In sequence diffusion, D3PM with an absorbing state isn't as computationally efficient as [autoregressive diffusion](https://arxiv.org/abs/2110.02037) while not allowing iterative refinement like D3PM with a uniform prior over amino acids. The paper would be slightly stronger if it discussed these choices, and much stronger if it evaluated the effect of these choices. However, I realize that this is probably not feasible during the discussion period!
In general, I think the exposition of the diffusion model could be shortened, as that is not the main contribution of the paper, in order to make more space for contributions that would increase the paper's significance.
### Minor
- The paper should include ablations on EC
- It's not clear to me why ESM-2-650M-GearNet doesn't require structure as input.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - How is EC a residue-level task? Don't the labels apply to the entire protein?
- Is there a way to use this model on proteins without a structure?
- How does the choice of diffusion process effect performance?
- How well does this work on function prediction, zero-shot fitness prediction, or protein engineering tasks?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: - SiamDiff requires structures at inference time.
- The paper only considers one diffusion process out of many possible choices.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your appreciation of our work! We respond to your questions and concerns below:
>**Q1: The evaluation on more types of downstream tasks would make this paper more significant.**
Thanks for the suggestion. We believe this additional experiment in the global response showcases the potential of SiamDiff on protein engineering tasks. However, we acknowledge that more effort is required when considering more complicated settings, e.g., insertion and deletion. Also, task-specific designs need to be explored to apply structure-based models for zero-shot fitness prediction. Due to the limited time in the rebuttal period, we leave these explorations as future work.
>**Q2: Different structure and sequence diffusion schemes are not thoroughly explored in the current draft.**
Thanks for pointing out this important aspect!
For structure diffusion, we resort to **diffusion on coordinates** for its **effectiveness on learning molecular representations that has been proven in recent small molecule pre-training works [a,b]**. We agree that **diffusion on amino acid frames and bond/torsion angles better fit the inductive biases of protein structure generation** as shown in FoldingDiff [c] and RFdiffusion [d]. It is intriguing to study the effectiveness of these structure diffusion schemes on protein representation learning. We will definitely investigate them in our future work.
For sequence diffusion, Autoregressive Diffusion Models (ARDMs) [e] are equivalent to the infinite time limit of Discrete Denoising Diffusion Probabilistic Models (D3PMs) [f] that are used in our current work. Therefore, **ARDMs are maximally expressive and can potentially enhance the effectiveness of our proposed DiffPreT and SiamDiff methods.** To explore this possibility, we conducted an initial experiment during the rebuttal by replacing D3PMs in SiamDiff with ARDMs. The results are presented in Table A.
Table A: Comparison between SiamDiff with D3PMs and ARDMs.
| Method | PIP (AUROC) | MSP (AUROC) | RES (Acc.) | PSR (Global $\rho$) | PSR (Mean $\rho$) |
|:----:|:----:|:----:|:----:|:----:|:----:|
|**SiamDiff**|**0.884**|**0.698**|**0.460**|**0.829**|**0.546**|
|w/ ARDM|0.883|0.640|0.450|0.828|0.533|
As shown in Table A, the results of ARDMs are quite close to those of SiamDiff on some tasks, but they generally fall short in performance. Our hypothesis is that the advantages of ARDMs in the original paper stem from their randomly assigned generation order and causal masking in Transformers. The causal masking scheme poses challenges for GNN-based encoders; in our implementation, we simply remove edges that do not adhere to the assigned order. Further investigation is needed to explore how to adapt ARDMs to proteins, and we leave this aspect for future work.
[a] Zaidi, Sheheryar, et al. "Pre-training via denoising for molecular property prediction." ICLR, 2023.
[b] Liu, Shengchao, Hongyu Guo, and Jian Tang. "Molecular geometry pretraining with se (3)-invariant denoising distance matching." ICLR, 2023.
[c] Wu, Kevin Eric, et al. "Protein structure generation via folding diffusion." arXiv, 2022.
[d] Watson, Joseph L., et al. "De novo design of protein structure and function with RFdiffusion." Nature, 2023.
[e] Hoogeboom, Emiel, et al. "Autoregressive diffusion models." ICLR, 2022.
[f] Austin, Jacob, et al. "Structured denoising diffusion models in discrete state-spaces." NeurIPS, 2021.
>**Q3: The paper should include ablations on EC.**
Thanks for the suggestion. We agree that including an ablation on EC would provide a better understanding of the components of SiamDiff on residue-level tasks. However, this is infeasible during the rebuttal period due to the large amount of computational resources needed (5 pre-training baselines × 3 repeated experiments, each requiring 4 GPUs and 24 hours of fine-tuning). We will run the experiments and add the ablation study in the final version.
>**Q4: It’s not clear why ESM-2-650M-GearNet doesn’t require structure as input.**
We would like to clarify that what we state in the paper is that ESM-2-650M (the sequence-encoder part of ESM-2-650M-GearNet) can only extract representations from protein sequences and cannot take protein structures as input, making it unsuitable for the structure-related tasks in Atom3D. **ESM-2-650M-GearNet can definitely take protein structures as input and extract structural representations via its GearNet component.** We will make this point clearer in the revision.
>**Q5: How is EC a residue-level task? Don’t the labels apply to the entire protein?**
We clarify that “residue-level” here means that **residue-level structures are used for task prediction, not that per-residue labels are predicted**. Therefore, the labels of the EC task apply to entire proteins, and we use residue-level protein structures to predict these per-protein labels. We will state this point more clearly in the revised version.
>**Q6: Is there a way to use this model on proteins without a structure? SiamDiff requires structures at inference time.**
As joint pre-training methods over protein sequences and structures, DiffPreT and SiamDiff require protein structures as input for model prediction. Given recent advances in protein structure prediction, accurate structures can be obtained for most proteins, albeit with additional inference time. However, we argue that in most protein-related tasks, including protein function and protein-protein interaction prediction, accuracy outweighs inference speed. Besides, even for high-throughput tasks dominated by sequence-based methods, like protein engineering, structure-based methods can be applied with little extra effort and yield large improvements, as shown in the GB1 experiment in the global response. Hence, we argue that the requirement of protein structures should not be considered a downside of our method.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: I am impressed with the additional experiments the authors managed to do during the rebuttal period and think that they increase the contribution of the paper. However, on GB1 2-vs-rest, there are some stronger baseline results in Table 3 here: https://www.biorxiv.org/content/10.1101/2022.05.19.492714v4.full.pdf
In general, I think this is a strong and interesting paper that deserves acceptance.
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: Thanks for bringing our attention to that work! We'll include that baseline in our final version. We acknowledge that these protein engineering tasks are very interesting and promising to study. We'll work on that with structure-based methods in the future. | Rebuttal 1:
Rebuttal: We extend our gratitude to all reviewers for valuable feedback. We’ve made significant improvements based on your suggestions. Here is a brief summary of important points:
>**New benchmark results on protein engineering task (Reviewer MBi3)**
During the rebuttal period, we followed your suggestion to include the GB1 dataset from FLIP in experiments. As this is a protein engineering task with mutated sequences, we assume that *the backbone structure remains unchanged after mutation*, to save costs in generating mutant structures. We only keep CA atoms in the wild type protein structure as the input to the encoder. We benchmark residue-level methods in Table 11 in the attached file, alongside CNN and ESM-1b baselines from the FLIP paper.
According to Table 11, we observe that modeling structural information is beneficial compared with using only sequential information, even under the assumption that all mutants share the same backbone structure. **Among all pre-training methods, SiamDiff demonstrates the most significant improvements over the baseline, once again validating the effectiveness of our method.**
>**Biological relevance of random torsional perturbation scheme (Reviewer Qtp8, E714)**
First, we reiterate that random side-chain perturbation is commonly used for simulating conformers [a,b]. Moreover, removing structures with clashes ensures that generated conformers are physically plausible. So the generated conformer distribution is biologically relevant.
Methodologically, although conformer generation adds torsional noise without changing the backbone, Gaussian noise is introduced to the backbone during the forward diffusion process. This allows our encoder to capture both backbone and side-chain noise effectively.
Besides, in pre-training, highly realistic conformers are not vital for learning better representations. To confirm this, we performed an extra rebuttal experiment (Table 12 in the attached file). Instead of random perturbation, we sample from a rotamer library [c] based on residue types and backbone angles. Table 12 shows that random torsional perturbation still outperforms sampling from a rotamer library on most tasks, confirming our hypothesis. This can be attributed to the fact that the objective of pre-training is to learn the common information between diverse views through mutual prediction, as in SimCLR and SimSiam. From this perspective, introducing random torsional noise allows us to generate more diverse conformers than relying solely on realistic conformer distributions.
In summary, while random torsional perturbation may not be as realistic as rotamer library-based or force field-based methods, it holds **biological relevance, is easy to implement, and proves to be a practical pre-training choice due to performance advantages**.
[a] Ho et al. "Probing the flexibility of large conformational changes in protein structures through local perturbations." PLoS computational biology, 2009.
[b] Ho et al. "Conserved tertiary couplings stabilize elements in the PDZ fold, leading to characteristic patterns of domain conformational flexibility." Protein Science, 2010.
[c] Shapovalov et al. "A smoothed backbone-dependent rotamer library for proteins derived from adaptive kernel density estimates and regressions." Structure, 2011.
>**New visualization results (Reviewer Qtp8)**
We have added visualization results in the attached pdf file. To explore pre-training insights, we visualize UMAP representations of 4 random AlphaFold DB proteins in Fig. 4.
Several interesting phenomena can be observed:
1. *Randomly initialized representations* in Fig. 4(A) form a clear, continuously color-changing trajectory (blue to red). **This confirms that the forward diffusion process gradually adds noise to proteins, leading to smooth changes in their representations, as expected for diffusion models.**
2. After *pre-training with large noise scales*, the encoder maintains the color smoothness of the trajectory, which is desired for effective denoising during the backward diffusion process. Intriguingly, pre-training narrows the trajectory compared to the broader trajectory without pre-training, particularly at the two ends. **This suggests that first-stage pre-training clusters proteins with similar levels of added noise, even for large and diverse noises.** This clustering property proves useful for detecting large perturbations in downstream tasks, such as mutation stability prediction in MSP, as opposed to the diverse representation distributions in Fig. 4(A).
3. Continuing with *small noise scale pre-training*, the trajectory becomes much narrower in the middle and even breaks for some proteins. **This indicates that by focusing on only slightly perturbed samples during pre-training, our model becomes capable of discerning proteins with small and large noises, making it more effective for fine-grained downstream tasks like PSR and PIP.** However, the red end of the trajectory is thicker than that in Fig. 4(B), which may imply some forgetting behavior in the second-stage pre-training.
>**Paper presentation (Reviewer MBi3, Qtp8, E714)**
We acknowledge the concerns raised by the reviewers regarding the paper's presentation. Due to the extensive methodology and experimental contribution, it was challenging to fit everything within the 9-page limit. In the submitted version, we dedicated a lengthy section to explaining the fundamental concepts, making it more accessible to readers unfamiliar with diffusion models on proteins. However, we recognize that many intriguing observations and experiments were relegated to the appendix. We believe this problem can be addressed in the camera ready version with an additional page limit. We plan to reorganize our paper based on your suggestions, including the shortening of Sec. 3 and the addition of more details in Sec. 4, merging Sec. 4.4 into the Related Work section, and providing necessary descriptions about the tasks and experimental settings.
Pdf: /pdf/a68286f62d770857589f6a5b349b8c0df5da0b68.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Latent Graph Inference with Limited Supervision | Accept (poster) | Summary: The authors figure out that the graph sparsification operation results in the supervision starvation problem in latent graph inference (LGI). They propose to identify k-hop starved nodes and diminish the starved nodes by incorporating a regularization adjacency matrix into the initial one. They further reduce the computational cost by using CUR matrix decomposition and tackle the weight contribution rate decay problem via some simple strategies. The effectiveness of the proposed method is validated on well-known benchmarks.
Update after rebuttal:
The authors have addressed my concern through the rebuttal. The score remains unchanged.
Strengths: 1. This paper is well-written and easy to follow.
2. This paper identifies the supervision starvation problem for LGI.
3. The proposed method is well-motivated and theoretically justified.
4. The proposed method is model-agnostic and can be seamlessly integrated into various LGI models.
5. The empirical results are significant on benchmarks.
Weaknesses: From the analysis in Section 4.2, the proposed method may be sensitive to the hyper-parameters $\tau$ and $\alpha$.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: In Eq. (4), the refined matrix $\widetilde{\mathbf{A}}$ becomes a weighted adjacency matrix when the positive parameter $\alpha \neq 1$. Will tuning $\alpha$ help with methods developed for unweighted graphs?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We really appreciate that the Reviewer identifies our contributions and provides constructive comments. We address your concerns as below**.
**W 1: Parameter sensitivity**.
Thank you for this valuable comment. In Sec. 4.2, we discussed how $\tau$ and $\alpha$ affect performance. In our method, $\tau$ determines the number of labeled nodes that will be connected by each starved node, and $\alpha$ balances the contribution between the initial adjacency matrix and the regularization one. As we can see from Table 3, our proposed method is not very sensitive to $\tau$. For different baselines, our method can obtain the best performance when $\tau$ belongs to $\lbrace 25, 30 \rbrace$. Therefore, we can set the same $\tau$ for different baselines. From Table 4, we found that the performance is relatively sensitive to $\alpha$. In fact, this is because the value of $\alpha$ depends more on the baseline we used. For GCN+KNN, a larger $\alpha$ leads to better performance. The potential reason is that the pre-constructed graph in GCN+KNN cannot be optimized. Therefore, enlarging $\alpha$ will make the regularization graph contribute more to the final loss and achieve more improvement. For GRCN and SLAPS, since the initial graph is revised by node embeddings or updated by self-supervision signals, a moderate value of $\alpha$ needs to be set to achieve a balance between the initial graph and the regularization one. We will add more discussion on this point in the final version.
**Q 1: Unweighted graphs**.
Thank you for asking this constructive question. In fact, all latent graph inference methods generate weighted graphs with real-valued edge weights (the matrix $A$ is real-valued) from their graph generators. Therefore, $\widetilde A$ is always a weighted adjacency matrix, regardless of how we set the value of $\alpha$. The edge weights are calculated from the distance functions designed by the different baselines; typically, the smaller the distance between nodes, the larger the corresponding edge weight. To test whether tuning $\alpha$ helps for unweighted graphs, we use SLAPS as an example and conduct more experiments. Specifically, we change the original graph generator of SLAPS to generate an unweighted graph and test its performance on the Pubmed dataset. We then tune the value of $\alpha$ and test the corresponding performance of the SLAPS_U and SLAPS_R methods. The experimental results are listed below. As we can see, for unweighted graphs, our method can still improve the SLAPS baseline. However, the performance improvement is not sensitive to the value of $\alpha$, since the learned graphs are unweighted.
| **$\alpha$** | **baseline(0)** | **0.01** | **0.1** | **1.0** | **10** | **50** | **100** |
|-------|-------------|------|---------|------|----|----|-----|
| **SLAPS_U** | 66.02 ± 1.47 | 68.26 ± 1.23 | 68.68 ± 1.04 | 68.72 ± 1.30 | 68.72 ± 1.30 | 68.72 ± 1.30 | 68.72 ± 1.30 |
| **SLAPS_R** | 66.02 ± 1.47 | 68.28 ± 1.20 | 68.34 ± 0.85 | 68.64 ± 1.21 | 68.64 ± 1.21 | 68.64 ± 1.21 | 68.64 ± 1.21 |
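For illustration, the balance between the initial and regularization adjacency matrices discussed above can be sketched in a few lines. This is a toy example assuming the additive form $\widetilde{A} = A + \alpha A_{reg}$ suggested by the discussion; the function name and matrices are hypothetical, not the paper's actual implementation:

```python
import numpy as np

def refine_adjacency(A, A_reg, alpha):
    """Blend the initial adjacency A with a regularization adjacency A_reg,
    assuming the additive form A_tilde = A + alpha * A_reg."""
    return A + alpha * A_reg

# Toy example: node 3 is a starved, isolated node; the regularization
# graph reconnects it to labeled node 1.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]], dtype=float)
A_reg = np.zeros_like(A)
A_reg[3, 1] = A_reg[1, 3] = 1.0

A_tilde = refine_adjacency(A, A_reg, alpha=0.5)
# With alpha != 1, the restored edge carries weight 0.5, so A_tilde is a
# weighted graph even though A and A_reg are binary, which is exactly the
# situation the reviewer's question points at.
```

This makes concrete why, for a generator that emits binary (unweighted) graphs, varying $\alpha$ only rescales the restored edges rather than reshaping the learned structure.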
---
Rebuttal Comment 1.1:
Comment: Thanks for your further analysis of parameter sensitivity and additional experiments on unweighted graphs. I have no further questions.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you very much for your quick response and valuable comments. We will try our best to revise our manuscript accordingly. Again, thank you so much for helping us improve the paper! | Summary: The paper points out that common LGI methods suffer from the issue of `supervision starvation`.
It also observes this issue is actually caused by the graph sparsification operation.
To address this problem, the paper proposes to restore the corrupted affinities and replenish the missed supervision.
It presents `CUR matrix decomposition` to reduce the computational burden and eliminates the starved nodes by reconstructing the destroyed connections.
The method is model-agnostic and can be seamlessly integrated into existing LGI methods.
Extensive experiments show promising results.
Strengths: 1) Originality.
The paper identifies graph sparsification as the main cause of supervision starvation (SS) in LGI.
The paper is solid in analyzing problems with insights.
2) Quality.
This method is supported by both theoretical (Theorem 1, 2) and experimental aspects, having strong persuasiveness.
3) Clarity.
The paper is well written and organized.
4) Significance.
LGI is a common task in graph learning. This method can potentially be applied in many situations.
Weaknesses: 1) The paper should clearly explain why using `CUR Decomposition` is more efficient.
The experimental results only show higher accuracy but do not reflect the efficiency.
So this advantage is not well supported.
2) Sections 3.2 and 3.3 are not easy to follow.
Reviewers and potential readers may prefer a more accessible version.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) The $L_{reg}$ in Eq. (9) should have a clear mathematical form in the paper.
2) The authors are encouraged to discuss more about SoTA GNNs [15,23,37,40] since the proposed approach is model-agnostic.
The reviewer is curious why these methods were mentioned in Section1 but not compared in experiments.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The LGI is a common task in graph learning and all experiments are conducted on public and popular datasets.
The reviewer sees no negative societal impact or limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We really appreciate that the Reviewer recognizes our contributions and originality, and gives us useful suggestions. We give our response below**.
**W 1: Why CUR Decomposition is more efficient**.
Thank you for this important comment. As stated in Lines 48-53, Sec. 1, when we say “more efficient”, we mean that using CUR Decomposition (the method proposed in Sec. 3.2) to identify the starved nodes is more efficient in comparison with the method based on the k-th power of a given adjacency matrix (the method proposed in Sec. 3.1). To support this point, we conduct more experiments to compare the time performance of these two methods. The experimental results on running time are listed below. As we can see, using CUR Decomposition is more efficient than the method based on the k-th power of the adjacency matrix, especially on the larger Pubmed dataset.
| **Datasets** | **Cora140** | **Cora390** | **Citeseer120** | **Citeseer370** | **Pubmed** |
|-----------|--------------|------------|---------------|--------------|----------------|
| **k-th power of adjacency matrix** | 1.10 | 1.05 | 1.65 | 1.67 | 296.81 |
| **CUR Decomposition** | 0.21 | 0.22 | 0.35 | 0.36 | 11.68 |
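To make the comparison concrete, here is a minimal sketch of the k-th-power criterion for identifying starved nodes, which is the baseline timed above. The function name and toy graph are our own, and the paper's actual implementation (and its CUR-based variant) may differ in details. The idea: a node is k-hop starved if no labeled node is reachable from it within k hops, which can be read off the k-th power of the self-loop-augmented adjacency matrix.

```python
import numpy as np

def starved_nodes(A, labeled_idx, k):
    """k-hop starved nodes: nodes from which no labeled node is reachable
    within k hops.  Dense matrix powers cost O(k * n^3), which is why a
    cheaper identification scheme pays off on larger graphs like Pubmed."""
    B = (A > 0).astype(float) + np.eye(A.shape[0])  # 1-hop reachability, incl. self
    R = np.linalg.matrix_power(B, k)                # R[i, j] > 0 iff j reachable in <= k hops
    return np.where(R[:, labeled_idx].sum(axis=1) == 0)[0]

# Toy graph: 0 - 1 - 2 form a path, node 3 is isolated; node 0 is labeled.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]], dtype=float)
print(starved_nodes(A, labeled_idx=[0], k=1))  # nodes 2 and 3 are 1-hop starved
print(starved_nodes(A, labeled_idx=[0], k=2))  # only node 3 remains starved
```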
**W 2: Revise sections 3.2 and 3.3**.
We thank the Reviewer for pointing this out. To better illustrate Theorem 2, we provide a simple example in the supplementary material. In addition, we will further polish these sections to make them easier to follow.
**Q 1: Clear form of L_reg**.
Thank you for this helpful suggestion. As stated in Sec. 3.4, different LGI methods adopt different graph regularization loss $L_{reg}$. For example, IDGL adopts the following Dirichlet energy as the graph regularization loss [4]:
$$L_{reg} = \frac{1}{2n^2} \sum_{i,j}A_{ij}||x_i-x_j||^2 = \frac{1}{n^2} tr(X^TLX).$$
SLAPS designs the following denoising autoencoder loss as the graph regularization loss [9]:
$$L_{reg} = F(X_{idx}, GNN_{DAE}(\widetilde X, A; \theta_{GNN_{DAE}})_{idx}).$$
We will add clear mathematical forms for these different regularization losses and give more discussion in the final version.
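As a side note, the Dirichlet-energy identity underlying the first formula can be checked numerically. This is a minimal verification sketch (dividing both sides by $n^2$ recovers the normalized form used by IDGL):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 3
W = rng.random((n, n))
A = (W + W.T) / 2                  # symmetric edge weights
np.fill_diagonal(A, 0)             # no self-loops
X = rng.standard_normal((n, d))    # node features/embeddings

L = np.diag(A.sum(axis=1)) - A     # combinatorial graph Laplacian L = D - A
lhs = 0.5 * sum(A[i, j] * np.sum((X[i] - X[j]) ** 2)
                for i in range(n) for j in range(n))
rhs = np.trace(X.T @ L @ X)
# (1/2) sum_ij A_ij ||x_i - x_j||^2 equals tr(X^T L X)
assert np.isclose(lhs, rhs)
```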
**Q 2: Discuss more about SOTA GNNs**.
We thank the Reviewer for this comment. We think [15][23] cannot be regarded as latent graph inference methods since they do not infer latent graphs solely based on the node features. More specifically, [15] requires a prior graph as the input and then modifies the prior graph structure based on the sparsity and low-rank assumptions. The main goal of this method is to tackle the adversarial attack problem rather than infer a good latent graph from scratch. [23] aims to solve the graph comparison problem based on graph kernels and graph neural networks. This method also does not involve the inference of latent graphs. Besides, [37] proposes a robust similarity measure based on the B-Attention mechanism for multiple clustering and ReID tasks, and [40] designs a Transformer model that scales all-pair message passing to large node classification graphs. These two methods do not involve the graph sparsification operation and thus do not encounter the supervision starvation problem. In summary, these methods [15, 23, 37, 40] are different from the latent graph inference methods used in our experiments. We will discuss more about these methods in the final version.
---
Rebuttal Comment 1.1:
Title: Additional Comments
Comment: Thanks for your response.
I still believe that at least one of these SoTA methods should be compared, rather than only discussed.
At least one experimental result for them should be included to show their shortcomings; otherwise, I would consider the experiments insufficient.
---
Reply to Comment 1.1.1:
Title: Thanks for your reply!
Comment: Dear Reviewer se4r,
Thank you very much for your reply and instructive suggestions. We are doing the experiments you requested now. We will try our best to show the experimental results before the discussion period.
Thanks again for your help in improving our paper!
Best regards,
Authors
---
Reply to Comment 1.1.2:
Title: New experimental results.
Comment: Thank you again for your constructive suggestions. We add one more SOTA method, Pro-GNN [15], for experiments. We use their source codes and adopt the same data partitioning. In fact, Pro-GNN does not infer a latent graph from scratch based on the node features. It requires a prior semantic graph as the input. To align with our latent graph inference task, we construct a KNN graph as the prior graph for Pro-GNN. The experimental results on the Cora and Citeseer datasets are shown below (it cannot be trained on other datasets directly due to the out-of-memory issue). As we can see, our proposed methods still achieve better performance.
| Datasets | Cora | Citeseer |
| --- | --- | --- |
| Pro-GNN | 70.73 ± 0.35 | 71.76 ± 0.14 |
| Pro-GNN_U (ours) | 70.76 ± 0.32 | 71.86 ± 0.95 |
| Pro-GNN_R (ours) | 70.85 ± 0.39 | 72.18 ± 0.33 |
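For completeness, a KNN prior graph of the kind fed to Pro-GNN above can be constructed from node features as follows. This is a hypothetical sketch (our actual construction may use a different distance or value of k):

```python
import numpy as np

def knn_graph(X, k):
    """Build a symmetric binary KNN adjacency from node features X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                          # exclude self-matches
    A = np.zeros_like(d2)
    idx = np.argsort(d2, axis=1)[:, :k]                   # k nearest neighbors per node
    rows = np.repeat(np.arange(len(X)), k)
    A[rows, idx.ravel()] = 1.0
    return np.maximum(A, A.T)                             # symmetrize

# Toy features on a line: nodes 0-2 cluster together, node 3 is far away.
X = np.array([[0.0], [1.0], [2.0], [10.0]])
A = knn_graph(X, k=1)
```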
We hope this can address your concern. And again, thanks for helping us improve the quality of our paper. | Summary: The paper proposes a method for latent graph inference (aka graph structure learning) based on the idea of mitigating the supervision starvation problem present when jointly learning the underlying graph structure and node representations. The paper claims that the supervision starvation problem is caused by the sparsifier operation present in many existing models in the literature. It further proposes to recover (some) starved edges to resolve the issue and shows that reducing the number of starved edges consistently improves the performance on multiple benchmarks.
Strengths: The problem of latent graph inference and the supervision starvation problem present in many existing approaches are important directions that are recently gaining more and more attention. The paper proposes an approach to identify the starved edges and add them back to the graph structure that is backed up by CUR decomposition and theoretical results. The final analysis shows the proposed strategy improves the performance of state-of-the-art LGI methods. The method is shown to be more effective when supervision is less.
Weaknesses: W1: One main weakness of the model is that the results are obtained using the average of the top five best testing performances. I understand that this setup was fixed across multiple baselines and datasets but to measure the generalization of the model, I would expect the results to be an average over testing performance of the models corresponding to the best validation performance. This is especially important as the charts in Figure 2 show a lot of instabilities in the test loss of multiple models.
W2: the proposed model adds some edges to the graph structure and compares itself to the base model without those edges. I suggest adding a baseline as the base model + random edges to make sure the performance obtained is not just because there are more edges present.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Q1: Can you elaborate more on what models M_U and M_R are referring to? My understanding so far is that M_U is referring to Equation 6 and M_R is referring to Equation 8.
Q2: Can you explain what edge weights have been used on both existing and starved edges? Is this the real-valued edge weight from the output of the graph generator for both types of edges?
Q3: What type of graph generators has been used to obtain the results in Table 1?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: One limitation of the model is that Theorem 2 only works for k of 1 and 2. For many academic datasets, using one or two layers of GNNs is enough. However, in many real-world applications, deeper GNNs are needed to capture long-range dependencies. The paper does not provide any insights on how the idea can be extended to higher values of k.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We really appreciate the Reviewer’s valuable comments. We address your concerns as below.**
**W 1: Best validation performance**.
We thank the Reviewer for this important suggestion, and we agree that the setting you mentioned is more reasonable. *To show the results of the models corresponding to the best validation performance, we conduct more experiments on the Pubmed dataset, and the experimental results are listed below*. As we can see, our methods still improve the baselines, which further indicates their effectiveness. Besides, we think the instability phenomenon depends on the baselines we used. For GRCN, its curves are stable. Although GCN+KNN and SLAPS show some instability in the loss and accuracy curves, *for all the methods, the testing accuracy curves of our models lie above the baselines*, further demonstrating the effectiveness of our models.
The performance of the models corresponding to the best validation performance:
| **Method** | **Accuracy (%)** | **Method** | **Accuracy (%)** | **Method** | **Accuracy (%)** |
|---|---|---|---|---|---|
| **GCN+KNN** | 67.28 ± 0.38 | **GRCN** | 68.62 ± 0.19 | **SLAPS** | 74.50 ± 0.33 |
| **GCN+KNN_U** | 72.76 ± 0.29 | **GRCN_U** | 72.30 ± 0.95 | **SLAPS_U** | 76.00 ± 0.41 |
| **GCN+KNN_R** | 72.98 ± 0.31 | **GRCN_R** | 72.14 ± 0.79 | **SLAPS_R** | 76.86 ± 0.83 |
The performance of the models corresponding to the best testing performance:
| **Method** | **Accuracy (%)** | **Method** | **Accuracy (%)** | **Method** | **Accuracy (%)** |
|---|---|---|---|---|---|
| **GCN+KNN** | 68.66 ± 0.05 | **GRCN** | 69.24 ± 0.20 | **SLAPS** | 74.86 ± 0.79 |
| **GCN+KNN_U** | 74.12 ± 0.32 | **GRCN_U** | 72.80 ± 0.99 | **SLAPS_U** | 76.74 ± 0.59 |
| **GCN+KNN_R** | 74.78 ± 0.17 | **GRCN_R** | 72.82 ± 1.03 | **SLAPS_R** | 77.12 ± 0.77 |
**W 2: Random edges**.
Thank you for this constructive suggestion. We add a baseline of the base model + random edges and conduct more experiments. The GCN+KNN and SLAPS methods are selected for these experiments since they can work on all datasets. The number of random edges matches the number of edges added by our methods. Following the experimental setting, we conduct five independent experiments and report the average accuracy along with the corresponding standard deviation below. From the results, *we observe that adding random edges cannot guarantee a performance improvement*. Therefore, we can be confident that the performance obtained by our methods is not simply because more edges are present.
| **Datasets** | **ogbn-arxiv** | **Cora390** | **Cora140** | **Citeseer370** | **Citeseer120** | **Pubmed** |
|---|---|---|----|---|---|---|
|**GCN+KNN** | 55.15 ± 0.11 | 72.82 ± 0.39 | 67.94 ± 0.29 | 73.28 ± 0.23 | 69.68 ± 0.53 | 68.66 ± 0.05 |
|**GCN+KNN_RandEdge** | 54.31 ± 0.15 | 72.68 ± 0.13 | 67.76 ± 0.33 | 72.34 ± 0.29 | 68.86 ± 0.76 | 68.46 ± 0.16 |
|**GCN+KNN_U (ours)** | 55.82 ± 0.11 | 72.82 ± 0.21 | 68.18 ± 0.44 | 73.68 ± 0.10 | 69.74 ± 0.54 | 74.12 ± 0.32 |
|**GCN+KNN_R (ours)** | 55.86 ± 0.10 | 72.92 ± 0.28 | 68.12 ± 0.48 | 73.66 ± 0.14 | 69.90 ± 0.68 | 74.78 ± 0.17 |
|**SLAPS** | 55.46 ± 0.12 | 76.62 ± 0.83 | 74.26 ± 0.53 | 74.32 ± 0.56 | 70.66 ± 0.97 | 74.86 ± 0.79 |
|**SLAPS_RandEdge** | 55.35 ± 0.50 | 76.68 ± 0.56 | 73.44 ± 0.67 | 74.08 ± 0.74 | 71.08 ± 0.81 | 74.10 ± 1.32 |
|**SLAPS_U (ours)** | 55.68 ± 0.09 | 76.94 ± 0.42 | 74.56 ± 0.21 | 74.82 ± 0.27 | 71.68 ± 0.47 | 76.74 ± 0.59 |
|**SLAPS_R (ours)** | 56.11 ± 0.15 | 76.82 ± 0.19 | 75.00 ± 0.49 | 74.90 ± 0.42 | 72.36 ± 0.49 | 77.12 ± 0.77 |
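For concreteness, the random-edge baseline can be built with a sketch like the following (this is our illustration of the setup described above, not the exact code used for the experiments; the function name and interface are ours):

```python
import numpy as np

def add_random_edges(A, n_new, seed=0):
    """Add n_new random undirected edges (currently absent from A) to a
    binary adjacency matrix, mirroring the number of edges that the
    proposed methods add. Assumes n_new is smaller than the number of
    absent node pairs."""
    rng = np.random.default_rng(seed)
    A = A.copy()
    n = A.shape[0]
    added = 0
    while added < n_new:
        i, j = rng.integers(0, n, size=2)
        if i != j and A[i, j] == 0:
            A[i, j] = A[j, i] = 1  # keep the graph undirected
            added += 1
    return A
```

The resulting adjacency matrix is then fed to the base model unchanged, so any performance difference is attributable to where the extra edges are placed rather than to their number.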
**Q 1: M_U and M_R**.
Thank you for pointing this out. In Line 268, M_U adopts the sparse intersection matrix $\hat U$ as the regularization, while M_R combines $\hat U$ and $Q$ together. That means M_U refers to Equation 6 with a sparse $U$ (not $\widetilde U$ but $\hat U$, to solve the weight contribution rate decay issue), and M_R refers to Equation 8 with $\hat U$. We will make this statement clearer in the final version.
**Q 2: Edge weights**.
Yes. In fact, all the latent graph inference methods use real-valued edge weights from the output of their graph generators. The edge weights are calculated based on their designed distance functions. The smaller the distance between nodes, the larger the corresponding edge weights. For fairness and simplicity, we also use real-valued weights for the starved edges.
**Q 3: Graph generators**.
In the experiments, the type of graph generators depended on the baselines we used. For instance, in IDGL, the graph generator utilizes a weighted cosine similarity function as the metric function [4]. LCGS uses dual-normalization as the graph construction method [12]. For GRCN, we adopt a kNN graph [43]. And for SLAPS, we use the MLP-kNN generator, which corresponds to a multi-layer perceptron followed by the kNN operation [9].
**L 1: Extend for higher values of k**.
We thank the Reviewer for this valuable comment. As we stated in the supplementary material, how to identify k-hop starved nodes for k>2 based on CUR decomposition remains an interesting direction for further investigation. Since submitting this manuscript, we have been working on this challenge and have now arrived at a solution. According to the definition of starved nodes, a k-hop starved node also qualifies as a (k-1)-hop starved node. Therefore, we can iteratively identify k-hop starved nodes based on (k-1)-hop starved nodes. We use k=3 for illustration. Suppose that $rowmask$ is an $n$-dimensional mask vector where only the indexes of 2-hop starved nodes are marked as $1$, and $RM_+$ is the set of indexes of positive elements of $rowmask$. Then, we can obtain $R=A[rowmask, :]$. For each row $i$ of $R$, $V_i$ is a 3-hop starved node if, for all $j$ satisfying $\mathbb{1}_{\mathbb{R}^+}(R_{ij})=1$, we have $j \in RM_+$. Based on this iterative strategy, we can solve the problem for higher values of $k$.
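To make the iterative strategy concrete, here is a small NumPy sketch (our illustrative implementation of the procedure sketched above, using a boolean mask in place of $rowmask$; the function name is ours):

```python
import numpy as np

def starved_nodes(A, labeled, k):
    """Iteratively identify k-hop starved nodes.
    A: (n, n) binary adjacency matrix; labeled: boolean mask.
    A node is 1-hop starved if it is unlabeled and has no labeled
    neighbor; it is k-hop starved if it is (k-1)-hop starved and
    all of its neighbors are (k-1)-hop starved as well."""
    # 1-hop starved: unlabeled nodes with zero labeled neighbors
    starved = (~labeled) & ((A @ labeled.astype(int)) == 0)
    for _ in range(k - 1):
        # keep a node only if every neighbor is starved at the
        # previous hop (vacuously true for isolated nodes)
        keep = np.array([starved[A[i] > 0].all() for i in range(len(A))])
        starved = starved & keep
    return starved
```

For instance, on a 5-node path graph with only the first node labeled, the 1-hop starved set shrinks at each extra hop until only the farthest node remains 3-hop starved.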
---
Rebuttal Comment 1.1:
Title: Reviewer Response
Comment: Thanks for providing the new result and insights. Most of my concerns are now resolved and I have raised my soundness score and the overall rating accordingly.
---
Reply to Comment 1.1.1:
Title: Thank you very much for raising the score!
Comment: Dear Reviewer tzHX,
We are very pleased that our responses resolved your concerns, and **thank you very much for raising the score!** We will try our best to revise our manuscript accordingly.
Again, thank you so much for helping us improve the paper!
Best regards,
Authors | Summary: This paper proposes a new method for the latent graph inference problem. The motivation of the new method is the existence of supervision starvation nodes caused by graph sparsification operation. To reduce the number of supervision starvation nodes, the authors propose a CUR matrix decomposition based method to add an additional adjacency matrix to the original sparse adjacency matrix with some constraints. The extensive experimental results show that the proposed idea is effective on various datasets and different base models.
Strengths: 1. The paper is well written. It is easy to follow the idea and understand the motivation and methodology design.
2. Although the method is simple, it is effective and with sufficient analysis.
3. The performance looks promising.
Weaknesses: The reason why some nodes may receive no supervision is also partly the use of only a one-layer GNN. With multiple GNN layers, the supervision information propagates through the graph to achieve semi-supervised training, which is the core of most GNNs. With multiple layers, is it still reasonable to define ‘supervision starvation’? In Line 155, why is a deeper GNN not an optimal solution to reduce the number of starved nodes? From Figure 1, we can see that increasing the GNN to 4 layers reduces the starved nodes from nearly 3,000 to only about 500.
Since matrix A is dynamically generated from a latent graph generator, will U or Q dynamically change as well? Are they dependent on the generated A? If so, is that differentiable?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We really appreciate the Reviewer’s approval and constructive comments. We address your concerns as below.**
**W 1**: Thank you for the constructive comments. We respond to this question in the following aspects:
**Reason**. The reason why some nodes receive no supervision is not merely the use of one-layer GNNs. As stated in Line 105, Sec. 2.3, *graph sparsification is the main cause of the supervision starvation problem in LGI*. In other words, graph sparsification leaves some nodes unable to receive supervision signals. Although increasing the number of GNN layers can alleviate this problem to some extent, starved nodes may still exist in multilayer GNNs, as shown in Figure 1.
**Multilayer GNNs**. Yes, it is still reasonable to define "supervision starvation" if there are multiple layers of GNNs. Let us explain. To allow a k-hop starved node to receive information from labeled nodes, the GNNs need to have at least k+1 layers. However, as the number of layers increases, the number of nodes in each node’s receptive field grows exponentially, and the exponentially-growing amount of information needs to be squashed into a fixed-length vector. This will cause a so-called over-squashing issue [1], where crucial messages (supervision information here) fail to reach their distant destinations (starved nodes). That means, *as k increases, the GNNs may still fail to propagate supervision information from labeled nodes to k-hop starved nodes*. Therefore, we think it is still reasonable to define "supervision starvation" even if there are multiple layers of GNNs.
**Optimal Solution**. Using deeper GNNs is not an optimal solution to the SS problem since it introduces new issues, as illustrated below. **First**, although increasing the GNN to 4 layers reduces the number of starved nodes from nearly 3,000 to about 500 for the Citeseer120 dataset, the remaining starved nodes still account for nearly $15\%$. In fact, the number of starved nodes depends on several factors, including the labeling rate of nodes, the graph sparsification process, and so on. *How to select the number of layers of GNNs to reduce the starved nodes for different datasets and different LGI methods is not easy*. **Second**, *the generalization of GNNs cannot be guaranteed by simply increasing the number of layers*. Deeper GNNs typically provide inferior performance due to over-smoothing [9][20] or over-squashing issues [1]. Empirically, we test the SLAPS method on the Pubmed dataset using different numbers of layers, and the corresponding accuracies can be seen in the table below. **Moreover**, *as the number of layers increases, both the number of parameters and the floating-point operations (FLOPs) increase*. Following the above experiments, the number of parameters and the FLOPs for 2-, 4-, and 8-layer models are listed in the same table below. In fact, most LGI methods only adopt 2-layer GNNs for effectiveness and efficiency. We will add more discussion on this in our final version.
| **Layers** | **Accuracy** | **Parameters** | **FLOPs** |
|:--------:|:--------------:|:--------------:|:-------------:|
| **2**| 74.86 ± 0.79 | 645.76K | 12.70G |
| **4**| 70.24 ± 1.58 | 680.90K | 13.39G |
| **8**| 41.18 ± 0.88 | 751.17K | 14.76G |
**W 2: U and Q**.
Yes, U and Q will dynamically change as well. In fact, the values of U and Q can be obtained by different methods. For simplicity, we directly adopt the same strategy as in generating A. In this simple setting, U and Q depend on the generated A. Of course, they are both differentiable.
---
Rebuttal Comment 1.1:
Title: Thanks for authors reply!
Comment: Thanks for the authors' reply!
I am still unsure about the reasonability of "supervision starvation". Can we call all the semi-supervised learning tasks 'supervision starvation' because only a limited number of data points have correct label information?
I will keep my score as before.
---
Reply to Comment 1.1.1:
Title: Thanks for your reply and question!
Comment: Dear Reviewer Qy2b,
**Thank you very much for your reply and keeping a good score for us!**
*No, we cannot call all the semi-supervised learning tasks “supervision starvation”*. The “supervision starvation” problem exists in semi-supervised latent graph inference tasks, not general semi-supervised tasks. Let us explain in detail.
As we know, there are two types of weights in latent graph inference methods: **network weights** (i.e., network parameters in GNNs) and **edge weights** (the values of the adjacency matrix). *When we say “supervision starvation”, we mean that some edge weights (not network weights) receive no supervision from any labeled nodes through the semi-supervised training loss*. This will lead to poor generalization since **these under-trained edge weights are inevitably used to make predictions for testing samples at the test time**. In fact, existing latent graph inference methods commonly suffer from this problem.
However, for general semi-supervised learning tasks (latent graph inference is not required), there is only one type of weight: **network weights** (i.e., network parameters). In this case, *all network weights can receive useful supervised information from a limited number of labeled data points through the semi-supervised training loss*. **And the predictions of training and testing samples are calculated based on the same network weights, which are well-trained**. Apparently, the “supervision starvation” problem does not exist here.
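This distinction can be made concrete with a toy computation: in a one-layer linear GNN trained with a loss on labeled nodes only, the loss gradient with respect to the edge weights in rows of unlabeled nodes is exactly zero, whereas the network weights always receive a gradient signal. The NumPy sketch below is our simplified illustration (linear model, squared loss), not the exact setting of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4
X = rng.normal(size=(n, d))      # node features
A = rng.random((n, n))           # learnable edge weights
w = rng.normal(size=d)           # network weights
labeled = [0, 1]                 # only two labeled nodes
y = np.array([1.0, -1.0])

# One-layer linear GNN: pred = A X w, squared loss on labeled nodes.
pred = (A @ X) @ w
resid = np.zeros(n)
resid[labeled] = pred[labeled] - y

# d(loss)/dA[i, j] = resid[i] * (X w)[j]: rows belonging to unlabeled
# nodes receive an exactly zero gradient, i.e. those edge weights starve.
grad_A = np.outer(resid, X @ w)
# d(loss)/dw aggregates over labeled nodes and is generically nonzero.
grad_w = (A @ X)[labeled].T @ resid[labeled]
```

Here `grad_A` vanishes on every row of an unlabeled node while `grad_w` does not, which is exactly the asymmetry between edge weights and network weights described above.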
In summary, *“**supervision starvation**” refers to some **edge weights** receiving no supervision in latent graph inference methods*. Therefore, we cannot call all the semi-supervised learning tasks “supervision starvation”.
We hope this can address your concern. Thanks again for your help in improving our paper!
Best regards,
Authors | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper introduces a model-agnostic enhancement for current LGI (latent graph inference) methods, aiming to address the assumed issue of unlearned features in a substantial number of nodes and edges, which are believed to negatively impact LGL's generalization performance. It proposes to learn a weighted residual graph, focusing on unlabeled nodes without labeled neighbors and intending to link them with some labeled nodes. Additionally, the proposed method applies a sparsity constraint to the residual graph, preventing significant alterations to the underlying graph topology learned from the base LGI model. Finally, the paper evaluates the effectiveness of this approach by integrating it with several prior LGI models, and finds modest improvements in standard graph learning tasks.
Strengths: The paper's strength lies in its investigation of the important problem of optimizing the structure for neighborhood aggregation in graph neural networks. It introduces a method that addresses the issue of "supervision starvation" by adding labeled neighbors to initially supervision-starved nodes, which is demonstrated to bring benefit to existing LGI methods, especially on larger graphs.
Weaknesses: 1. The paper's proposed method shows moderate improvements, with limited statistical significance on small graphs (Cora and Citeseer). Larger graphs display more significant enhancements. The relationship between improvement and graph size should be explored for better insights. Further improvements for small graph scenarios are needed.
2. The paper lacks clarity in explaining the generation of the residual adjacency matrix. The emphasis on the relationship with CUR decomposition should be weakened since the learning criteria for weights in matrix $\tilde{U}$ are unrelated to reconstructing the target matrix $Q$. Furthermore, the method of selecting "supplementary adjacent points" from the labeled nodes for each 1-hop starved node remains unclear. The authors should be aware that "randomly selecting a subset of $\tau$ nodes from $c$ labeled nodes" is a different statistical event compared to "picking each labeled node at a probability of $\frac{\tau}{c}$". Improved explanations of these aspects are necessary to avoid confusion and strengthen the paper's presentation.
3. The paper lacks significant theoretical and technical novelty. Certain modeling choices are presented without adequate theoretical justification to demonstrate their importance or sensitivity analysis to show their insignificance. Specifically, the content in the paper should be enriched or reorganized to address the following research questions: (1) the reasons and mechanisms behind the significance of k-hop starved nodes in training a GNN through semi-supervised learning (i.e., why they are pivotal nodes), and (2) a comprehensive evaluation of the advantages and disadvantages of selecting k=1 to define the residual bipartite graph between k-hop starved nodes and labeled nodes, emphasizing how the benefits outweigh any drawbacks. Addressing these questions would enhance the paper's contributions and strengthen its overall impact in the field.
Given the aforementioned issues, a major revision of the current version of the paper is recommended before acceptance.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Could the authors provide further elaboration on why the k-hop starved edges are defined as "having at least one endpoint being a $k$-hop starved node"? If the main objective is to identify edges whose weights are not trained in a $k$-layer GNN, wouldn't a more appropriate definition be "both endpoints are $(k-1)$-hop starved nodes"? (0-hop starved = unlabeled)
2. Could the authors provide an analysis of why adding connections between 1-hop starved nodes and labeled nodes improves the model's generalization? It appears that while introducing supervision signals to some unlabeled nodes, these additional connections to the labeled nodes may make them less "similar" to the unlabeled nodes. For example, the degree distribution of labeled and unlabeled nodes might be similar in the original dataset, but after the structural modifications proposed in the paper, the labeled nodes may exhibit significantly higher degrees than the unlabeled nodes. Wouldn't such divergence in the distribution of labeled and unlabeled nodes negatively impact the model's generalization? An explanation of these potential effects would help clarify the benefits and implications of the proposed structural modifications.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We thank the Reviewer for providing detailed comments. We address your concerns as below.**
**W 1: Graph size**.
We would like to clarify that there is no direct relationship between improvement and graph size. We kindly remind the Reviewer that our method aims to eliminate starved nodes so as to utilize the missed supervision for better LGI. That means the improvement is directly related to the starved nodes, not the graph size. Besides, the number of starved nodes does not depend on the graph size but on the labeling rate of nodes and the graph sparsification process. As listed in Table 1, the results also do not show more enhancements on larger graphs and less improvement on smaller graphs. For example, SLAPS_R obtains a 1.7% improvement on Citeseer120 (nodes: 3,327, labeling rate: 3.61%) while only getting a 0.65% improvement on ogbn-arxiv (nodes: 169,343, labeling rate: 53.70%). Therefore, we think that exploring the relationship between improvement and graph size may not be sensible.
**W 2: Generation of residual matrix**. The generation of matrix $B$ can be seen in Line 195-203, Sec 3.2. In fact, we do not generate $B$ directly. Instead, we generate a smaller matrix $\widetilde U$ first and then obtain $B$ by padding some rows and columns of zeros (see Eq. 6).
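Schematically, the padding step can be pictured as follows (our notation, not the paper's exact Eq. 6; the index arguments are hypothetical placeholders for the actual starved/labeled node indexes):

```python
import numpy as np

def pad_residual(U_tilde, starved_idx, labeled_idx, n):
    """Scatter the small learned block U_tilde (r x c), defined only on
    starved-node rows and labeled-node columns, into an n x n residual
    matrix B that is zero everywhere else."""
    B = np.zeros((n, n))
    B[np.ix_(starved_idx, labeled_idx)] = U_tilde
    return B
```

In other words, only the $r \times c$ block between starved and labeled nodes is learned; the zero padding keeps the rest of the topology untouched.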
**Matrix $Q$**. We would like to clarify that $Q$ is not the learning target, and we do not aim to reconstruct the matrix $Q$. $Q$ is only used in Definition 4 of “CUR Matrix Decomposition”. What we really want is the matrix $U$, which helps us identify and reduce the starved nodes.
**Selection method**. As stated in Line 269, Sec 4, we select $\tau$ closest labeled nodes as the supplementary adjacent points. How to measure the distances depends on what baselines we used.
**Probability**. We need to clarify that we do not “pick each labeled node at a probability of $\frac{\tau}{c}$”. Instead, we “randomly select $\tau$ nodes from $c$ labeled nodes” for each starved node. These two statements are totally different. In Proposition 1, we state “If there are $r$ 1-hop starved nodes, then for any $j \in CM_+$”, which means we consider the relationships between $r$ 1-hop starved nodes and one labeled node here. For any specific labeled node, say LN, if we randomly select $\tau$ nodes from $c$ labeled nodes for a 1-hop starved node, the probability of the node LN being selected is $\frac{\tau}{c}$. Obviously, this is totally different from the statement “pick each labeled node...” since we only consider one labeled node here.
**W 3: Novelty**. In this paper, we identify that graph sparsification is the main cause of the SS problem, which is not explored by existing LGI methods. To address this issue, we propose to replenish the missed supervision for better LGI. Specifically, we first give a novel definition of starved nodes. Then, to identify the starved nodes, we design two simple solutions in Secs. 3.1 and 3.2 respectively. The starved nodes can be further diminished by our proposed regularization graphs. However, there still exists a potential issue we call weight contribution rate decay in Sec 3.3. That’s why we provide two modeling choices: M_U and M_R.
**Reason**. k-hop starved nodes are pivotal since we can tackle the SS problem directly by eliminating these nodes. In the SS problem, a number of edge weights are learned without any semantic supervision, which leads to poor generalization because the under-trained weights are used for the predictions of testing samples. If we eliminate the starved nodes, the missed supervision can be replenished to guide the learning and updating of these weights.
**Definition of residual graph**. In fact, we can set any value for k to define residual graphs. However, if we set k>1, there will be a potential issue. For example, if we set k=3 and eliminate the 3-hop starved nodes, 1-hop and 2-hop starved nodes may still exist. To address this, a simple solution is to set k=1 to define the residual graph. Why does setting k=1 suffice? According to the definition of starved nodes, we know that a k-hop starved node also qualifies as a (k-1)-hop starved node. Therefore, if we eliminate 1-hop starved nodes, then all m-hop starved nodes for m>1 will be eliminated, as stated in Line 203. Due to its effectiveness and simplicity, we select k=1.
**Q 1: Definition of starved edge**. Given a k-layer GNN, if one endpoint of an edge is a k-hop starved node, the weight of this edge is learned without any semantic supervision. This is because the k-hop neighbors of this node are all unlabeled, while a k-layer GNN can only aggregate signals from k-hop neighbors. According to this analysis, “both endpoints are (k-1)-hop starved nodes” is not a correct definition for k-hop starved edges. We kindly suggest the Reviewer check Definition 2 again for the meaning of k-hop starved nodes.
**Q 2: Generalization**. This question is essentially asking why eliminating starved nodes helps the model make better predictions for testing samples. As we stated in Line 100, Sec. 2.3, for a general LGI model, a number of edge weights are learned without any semantic supervision and cannot be semantically optimal after training. As a result, the model will exhibit poor generalization since the under-trained weights are used for the predictions of testing samples. Our methods replenish the missed supervision to guide the learning and updating of the under-trained weights, thus making better predictions. Besides, structural modification is necessary for LGI methods since we have absolutely no idea what the real structure of the data looks like. A good LGI model needs to infer a good latent structure from node features. The degree distribution is not a useful indicator that affects the generalization of a model. For example, when using full parameterization as the graph generator for SLAPS [9], all nodes have the same degrees. In this case, SLAPS can still obtain good or even better performance, as shown in [9].
---
Rebuttal Comment 1.1:
Title: Looking forward to further discussions
Comment: Dear Reviewer Qcor,
Sorry to bother you! We are here to see if our responses have resolved your concerns. Do you expect us to have more analyses, results, or discussions for you to make a better evaluation of our paper? (If you do, please let us know asap so that we can have enough time to finish it.)
Thank you very much again for your constructive comments.
Best regards,
Authors of Submission 2239
---
Reply to Comment 1.1.1:
Title: Sincerely Expecting Further Discussion
Comment: Dear Reviewer Qcor,
It is only 4 days away from the end (8/21) of the discussion period, and we haven't received any feedback from you regarding our responses. We are here to see if you could spend a few minutes checking our responses.
Your concerns mainly relate to graph size, the generation of residual graphs, the definition of starved nodes, novelty, and generalization, for which we have dedicatedly provided more analysis and explanations to clarify our method. We do want to hear your further opinion about this, which is essential for us to improve the work. Thank you very much!
Best regards,
Authors | null | null | null | null | null | null |
Distribution-Free Model-Agnostic Regression Calibration via Nonparametric Methods | Accept (poster) | Summary: This paper addressed the uncertainty quantification problem in regression models, specifically focusing on individual calibration to characterize prediction model quantiles. To overcome these limitations, they proposed simple nonparametric calibration methods that are both computationally efficient and statistically consistent. Their approach provides insights into individual calibration possibilities and establishes upper and lower bounds for calibration error. They attempted to advance existing theoretical analyses by combining nonparametric and parametric techniques, offering new perspectives on regression calibration regarding the curse of dimensionality and reconciling previous findings on individual calibration impossibility.
Strengths: The paper is overall well written and the guarantees given are technically sound. The authors have demonstrated the effectiveness of their methods through several experiments. The ideas developed are novel to the best of my knowledge but their practicality is questionable. There are many advancements in the field of conformal inference that give similar guarantees with minimal assumptions. I think the paper would benefit from comparing their developed method in varied settings with the existing approaches in conformal inference to prove its efficacy.
Weaknesses: In this paper, the authors tackle the uncertainty quantification problem for regression models, focusing on individual calibration. While the proposed nonparametric calibration methods and the accompanying analysis present some interesting ideas, I have serious concerns about the authors' familiarity with previous works in the field, as well as the lack of necessary citations to support their claims.
One significant issue is the absence of references to previous research on conditional conformal prediction, which is a relevant and well-established framework in uncertainty quantification. The authors should have acknowledged and discussed how their proposed methods relate to or differ from existing conditional conformal prediction approaches. Failure to do so undermines the paper's novelty and raises doubts about the authors' understanding of the current state-of-the-art in the field. There have been various works in this area namely:
1. "Conformalized Quantile Regression" by "Yaniv Romano, Evan Patterson, Emmanuel J. Candès",
2. "Improving conditional coverage via orthogonal quantile regression" by "S Feldman, S Bates, Y Romano"
3. "Class-Conditional Conformal Prediction With Many Classes " by "T Ding, AN Angelopoulos, S Bates, MI Jordan, RJ Tibshirani"
4. "Conformal prediction with conditional guarantees" by "I. Gibbs, J. J. Cherian, E. J. Candès "
5. "Knowing what you know: valid and validated confidence sets in multiclass and multilabel prediction" by "M Cauchois, S Gupta, JC Duchi"
The proposed method also suffers from the curse of dimensionality, and the authors have only included experiments with several UCI datasets, which are relatively low-dimensional. On the other hand, advances in the field of conformal inference have produced methods that seamlessly adapt to very high-dimensional datasets like ImageNet, MNIST, etc.
Further, the theoretical guarantees in the paper rely on very strong assumptions, i.e., Assumption 1 of the paper, unlike previous works in this field.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I do not have further questions, all the concerns are summarised in the "weakness" and "limitation" sections.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 1 poor
Limitations: The proposed method suffers from the curse of dimensionality as I mentioned before and the method relies on very strong assumptions. The authors have not implemented their proposed method on high dimensional datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the sincere comments and for pointing out the literature we missed. In the past week, we spent considerable time working through these papers and the related literature. We summarize our findings below. We hope this better clarifies our positioning, and we look forward to further discussion with you and the other reviewers in the coming week.
Positioning of our work:
The reviewer raised the question of whether such algorithms (with the goal of individual calibration) are practical in high-dimensional settings. As noted in our paper, our work aims to provide a positive result for individual calibration, given that the existing results on individual calibration for regression problems are largely negative. Let us separate the discussion into low-dimensional and high-dimensional regimes.
- Low-dimensional regime: after reading our work, we believe all the reviewers would agree that individual calibration is, to some extent, achievable in the low-dimensional regime; yet this fact is not acknowledged by the existing literature. Importantly, we also justify the necessity of striving for such individual calibration via the pinball loss arising from downstream tasks such as inventory management or the newsvendor problem (Proposition 2), which serves as a motivating example for many calibration papers. Combined, these imply that abandoning individual calibration out of intimidation by its hardness is unnecessary and costly.
- High-dimensional regime: Our paper answers the question of when individual calibration is achievable, but we do not claim that it is always achievable. For high-dimensional data, we agree with the reviewer that individual calibration can be too ambitious a goal. Yet, on the positive end, as in Theorem 3 of our paper (whose proof is technically straightforward), if the high-dimensional covariates admit low-dimensional statistics that induce some conditional independence, then individual calibration can be achieved using those low-dimensional statistics. Practically, the low-dimensional statistics can be obtained by training a neural network with a small number of neurons in the second-last layer. There is a line of literature studying the feature-extraction ability of neural network models; Theorem 3 says that such developments can be useful in achieving individual calibration in the high-dimensional regime.
- The reviewer mentioned high-dimensional datasets such as MNIST and ImageNet. Both datasets are treated only as classification tasks in the existing literature on conformal prediction/calibration, while our paper studies regression problems. The eight datasets in the numerical experiments were chosen because they are the standard ones used in most (if not all) regression calibration papers. For demonstration purposes, we include a new, higher-dimensional dataset, Yacht (d=100, n=2000), and observe similar advantages of our algorithm over the benchmarks. A 100-dimensional vector may not be as large as MNIST or ImageNet, but when working in even higher dimensions, the second-last layer of the neural network should have a dimensionality of similar magnitude.
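To make the idea behind Theorem 3 concrete, below is a minimal, hypothetical Python sketch (not the paper's actual pipeline): a stand-in "penultimate layer" maps 100-dimensional covariates to 2-dimensional statistics, and a kernel-weighted empirical quantile of the residuals is then computed in the low-dimensional feature space. The random-projection "network" and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def penultimate_features(X, W):
    # Stand-in for the second-last layer of a trained network
    # (an untrained random projection here, purely illustrative).
    return np.tanh(X @ W)

def kernel_quantile(Z_train, r_train, z0, tau, h):
    # Kernel-weighted empirical tau-quantile of residuals r near feature z0.
    w = np.exp(-np.sum((Z_train - z0) ** 2, axis=1) / (2 * h ** 2))
    order = np.argsort(r_train)
    cum_w = np.cumsum(w[order]) / w.sum()
    return r_train[order][np.searchsorted(cum_w, tau)]

d, k, n = 100, 2, 2000                 # high-dimensional X, low-dimensional statistics
W = rng.normal(size=(d, k)) / np.sqrt(d)
X = rng.normal(size=(n, d))
Z = penultimate_features(X, W)         # low-dimensional summary of X
r = rng.normal(size=n) * (0.5 + np.abs(Z[:, 0]))  # heteroscedastic residuals

q90 = kernel_quantile(Z, r, Z[0], tau=0.9, h=0.3)
```

The point of the sketch is only that the nonparametric step operates on the k-dimensional features Z rather than the d-dimensional covariates X, which is where the dimension reduction pays off.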
Literature on conformal prediction:
We appreciate the reviewer for bringing these papers to our attention. We believe, as noted by other reviewers, that our paper includes a much longer literature list than a typical conference paper; we did not deliberately omit these papers (also, 2 of the papers mentioned by the reviewer were in fact posted after the NeurIPS 2023 submission deadline). As noted in our paper, we found researchers from different communities (conformal prediction, calibration, and quantile regression) working on similar or even identical problem setups without acknowledging much of each other's work. For example, the existing calibration literature rarely cites the papers on conformal prediction, and vice versa. To this end, we hope our work can make at least some minimal effort toward synchronizing the understanding and awareness across communities, and in our paper, we mention the perspective of conformal prediction throughout the presentation of our results. Hence, we have read these conformal prediction papers with great interest and will include a discussion of this line of work in the next version of our paper. We defer a detailed discussion to the Author Rebuttal.
We also want to bring one thing to the reviewer's attention. The majority of work in conformal prediction follows a two-step procedure, also known as a split procedure, that first fits a mean prediction model and then calibrates the prediction error. Meanwhile, the majority of work in the calibration literature follows a one-step procedure that usually appends a calibration penalty to the mean prediction loss in the loss function. The two-step split procedure has the apparent advantage of avoiding overfitting, and thus avoids the overly narrow confidence intervals that the one-step procedure can produce. Our paper provides a second justification for this two-step procedure (see our response to aPS1), which we believe complements the existing works on conformal prediction.
Lipschitzness assumption:
The Lipschitzness condition in our paper helps answer one key question of conformal prediction. It should be regarded as the minimum assumption needed to avoid violating requirements (i) and (iii) (see the Author Rebuttal for the requirements), where the term “minimum” is made precise by our Theorem 2 and Theorem 6. Theorem 6 states that without such a Lipschitz condition, any estimator, even if asymptotically consistent, can suffer from an arbitrarily slow convergence rate. In other words, the condition is an unavoidable cost of achieving a meaningful (iii). Theorem 2, on the other hand, proves that the minimax lower bound under a Lipschitz condition is matched (up to poly-logarithmic factors) by our algorithm.
---
Rebuttal Comment 1.1:
Comment: We thank the authors for the clarifications.
"Also, as noted in our paper, we found researchers from different communities, conformal prediction, calibration, and quantile regression, working on similar or even identical problem setup but don’t acknowledge much each other’s work."-- I am sorry but I am not convinced by this line of reasoning. I believe we are posing a problem relevant to the scientific community and proposing a solution that works and is somewhat better than existing methods (thus adding to the novelty). The field of conformal inference has flourished in recent times and has shown exceptional promise in solving the posed problem in both regression and classification settings. As for split conformal, we have methods like Jackknife+ (Barber et al. 2022) which seamlessly avoid data splitting. Given the generalizability of conformal-inference-type methods across different dataset dimensionalities, problem types (regression and classification), and computational efficiency, the paper stands incomplete without proper comparison.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the follow-up. In writing the paper and the previous responses/discussions, we have tried to follow two principles: (i) open-mindedness; (ii) technically rigorous arguments. We understand that this can be a busy week for all the reviewers, but we would appreciate it very much if the reviewer could spend some time reading our paper and the previous responses.
Our apologies for another lengthy response. To save time, we provide a TL;DR version for the literature mentioned by the reviewer below:
- Paper 3 and Paper 4 were posted on arXiv after the NeurIPS 2023 submission deadline. We are not sure whether these two papers are also under review at NeurIPS 2023.
- Even so, we aim for open-mindedness and will include these two papers in future versions of our paper. In our previous response, we discussed that Paper 3 and Paper 4 are not aiming for an individual coverage guarantee, which differs from our positioning of individual calibration.
- Paper 1 and Paper 5, together with the newly mentioned Jackknife+ paper (Barber et al. 2022), all aim for a marginal calibration guarantee. For Barber et al. (2022), apart from offering a marginal calibration guarantee versus our individual calibration guarantee, the proposed method can be computationally costly compared to ours because it is a Jackknife-based approach.
- Paper 2 aims for individual calibration but requires much stronger conditions for their theoretical results (See the last few paragraphs in our Author rebuttal).
We see all of the above as “facts” supported by technically rigorous arguments. We are happy to follow up with additional technical discussion if there is any confusion or comment regarding these facts.
We’d like to separate “facts” from “opinions”, such as the following one, with which the reviewer disagrees:
"Also, as noted in our paper, we found researchers from different communities, conformal prediction, calibration, and quantile regression, working on similar or even identical problem setup but don’t acknowledge much each other’s work."
This is our opinion, which, of course, can be agreed or disagreed with by other researchers. We hold this opinion because, among the 50+ papers in our reference list, the calibration papers mostly do not mention the work on conformal prediction, while the conformal prediction papers do not cite the calibration papers. Nevertheless, even before this discussion, we tried to maintain open-mindedness and drew connections between our results and the conformal prediction literature in our paper.
We thank all the reviewers and ACs for taking the time to read our responses. | Summary: This paper studies uncertainty quantification for the regression problem. In particular, it considers the estimation of conditional quantiles (of the residuals) via the kernel method. The convergence rate of the proposed estimator is established, along with a matching lower bound. The proposed method is evaluated on multiple datasets and compared with other candidate methods.
Strengths: The paper considers an interesting problem; the examples showing the unexpected results of existing methods are motivating; the solution provided has solid theoretical properties and shows satisfactory empirical performance in numerical experiments.
Weaknesses: I was wondering about the position of this paper within the line of work on conditional quantile regression (e.g., Takeuchi et al. (2006); Steinwart et al. (2011)). A discussion in this direction would be appreciated.
References:
Takeuchi, Ichiro, et al. "Nonparametric quantile estimation." (2006).
Steinwart, Ingo, and Andreas Christmann. "Estimating conditional quantiles with the help of the pinball loss." (2011): 211-225.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. As mentioned above, I wonder how this work compares with the line of work on (conditional) quantile regression.
2. It might be helpful to also show in the simulations the results if one directly estimates the conditional quantiles of $Y$.
3. I wonder if the proposed method can be used within the framework of conformal inference and achieve distribution-free marginal calibration as well.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for appreciating our work and raising inspiring questions.
Direct estimation:
We include, in the attachment, additional experiments that directly estimate the conditional quantiles by optimizing the pinball loss. They do show the advantage of the two-step procedure. We also provide more theoretical explanation for the advantage of this decomposition, or two-step procedure, in our response to Reviewer aPS1, and we would be grateful if you have the time and interest to read the explanations therein.
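For concreteness, the pinball loss and its defining property — that the empirical tau-quantile minimizes it — can be sketched as follows. This is a toy illustration only, not the attached experiments:

```python
import numpy as np

def pinball_loss(y, q, tau):
    # tau*(y-q) when y >= q, and (1-tau)*(q-y) otherwise, averaged over samples.
    d = y - q
    return np.mean(np.maximum(tau * d, (tau - 1) * d))

rng = np.random.default_rng(1)
y = rng.normal(size=10_000)
tau = 0.9

# The minimizer of the pinball loss over a grid of constant predictions
# recovers the empirical tau-quantile of y.
grid = np.linspace(-3, 3, 601)
q_star = grid[int(np.argmin([pinball_loss(y, q, tau) for q in grid]))]
```

The one-step approach discussed above fits a whole function q(x) by minimizing this loss; the toy above shows only the pointwise property that makes the loss a valid quantile objective.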
Marginal calibration for conformal prediction:
In this work, we focus mainly on the individual calibration objective; our main motivation when initiating the project was in fact to provide a positive result against the pessimism about individual calibration expressed in existing works. For marginal calibration, there are many existing methods in the conformal prediction literature that enjoy both successful empirical performance and nice theoretical guarantees (see also our response to Reviewer bxwF). For our method, if one aims for a marginal calibration guarantee, a simple way to modify Algorithm 1 is to remove the kernel weighting and assign a uniform weight to all the samples. The marginal calibration guarantee can then be derived under fewer assumptions (without Assumptions 1 (a) and (c)).
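The suggested modification can be sketched in a few lines. This is illustrative only — `weighted_quantile` is a stand-in for the estimator in Algorithm 1, and the data are synthetic: kernel weights adapt the residual quantile to the local noise level, while uniform weights collapse to one global quantile, in the spirit of split conformal prediction.

```python
import numpy as np

def weighted_quantile(r, w, tau):
    # Empirical tau-quantile of residuals r under sample weights w.
    order = np.argsort(r)
    cum_w = np.cumsum(w[order]) / w.sum()
    return r[order][np.searchsorted(cum_w, tau)]

rng = np.random.default_rng(2)
n, tau = 1000, 0.9
x = rng.uniform(-1, 1, size=n)
r = rng.normal(size=n) * (1 + np.abs(x))   # residual spread grows with |x|

x0, h = 0.9, 0.2
w_local = np.exp(-((x - x0) ** 2) / (2 * h ** 2))  # kernel weights: individual
w_unif = np.ones(n)                                # uniform weights: marginal

q_local = weighted_quantile(r, w_local, tau)  # adapts to the wide noise near x0
q_marg = weighted_quantile(r, w_unif, tau)    # one global quantile for all x
```

Since the noise is wider near x0 = 0.9 than on average, we would expect the local estimate to exceed the marginal one here.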
Positioning against (conditional) quantile regression:
First, we’d like to thank the reviewer for bringing up these two papers. We believed we had done an exhaustive search in the literature review part of our paper, but we still missed this line of work on conditional quantile regression. As we noted in our paper and our responses to other reviewers, it seems to us that several communities are working on the same problem but are not aware of, and do not cite, each other. In this light, we have read the papers with great interest and will definitely include them in the next version of our paper.
Both papers utilize the kernelized method and directly predict the quantile function (one-step). Essentially, methods from both papers search for a quantile prediction function over the RKHS function space $\mathcal{H}$. Apart from the comparison between the one-step method and our two-step method in our response to Reviewer aPS1, we discuss the positioning of our results against these two works in two additional aspects:
- Theoretically, Takeuchi et al. (2006) derived a performance guarantee for the conditional quantile estimator. Their derivation is based on a Rademacher complexity bound over the function class with a bounded $\mathcal{H}$-norm, and to ensure a consistency result, it requires the true conditional quantile function to have a bounded $\mathcal{H}$-norm. This is in parallel with our Lipschitz class (Assumption 1 (a)). The Lipschitz function class and the bounded-$\mathcal{H}$-norm function class overlap, but neither contains the other. Steinwart and Christmann (2011) analyze the same algorithm but adopt a different approach: instead of imposing a bounded $\mathcal{H}$-norm, they impose a decay rate on the eigenvalues of the kernel integral operator, alongside some other assumptions on the underlying distribution (on a minor note, their theorem statement omits the condition that the chosen kernel should lead to an RKHS dense in the $L_1$ space). In short, both papers, together with ours, strive to fight the impossibility of individual calibration by imposing conditions on the true quantile function so that it belongs to a certain function class. These two works focus on RKHS spaces while we focus on the Lipschitz space; generally, we find the imposed conditions not quite comparable to each other. For the Lipschitz function class, to the best of our knowledge, our work is the first to provide an individual guarantee.
- Empirically/computationally, our algorithm features an analytical solution and is thus very simple and efficient to implement, while both papers rely on solving a kernelized learning problem that does not scale well with sample size or dimension. Steinwart and Christmann (2011) is more of a theoretical work and does not provide numerical experiments. From a practical viewpoint, our method/framework is more about (i) justifying the widely adopted two-step calibration procedure and (ii) providing a simple solution and a positive result for individual calibration. Though our work and these two papers aim for the same individual guarantee, we do not quite see our framework as a competitor to them, because at the end of the day, one can adopt the kernelized method for the mean prediction part (in place of a linear model or NN) and/or the error quantile prediction part (in place of the simple nonparametric estimator) of our framework.
We hope our response addresses the raised questions. If there are any follow-up questions/concerns, we will get back to you timely in the following discussion week.
---
Rebuttal Comment 1.1:
Title: Response to the authors
Comment: I thank the authors for the response and the comparison, and I look forward to seeing these contents added to the paper. | Summary: The paper considers the uncertainty quantification problem for regression models. First, they proposed an algorithm for simple nonparametric quantile estimator. Then, they further proposed the nonparametric regression calibration algorithm. They also provide theoretical analysis of the proposed algorithms and implications.
Strengths: 1) The paper is well-organized and well-written.
2) The paper provides several theoretical results and implications of the theory.
3) The paper includes extensive experiments.
Weaknesses: 1) For Algorithm 1, do other kernels work for the proposed algorithm? Are there any requirements for the kernels?
2) For Algorithm 2, could the split proportion be different than half and half? The proportion would influence the results. Are there any experimental results to check the effect of the proportion?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) For Algorithm 1, do other kernels work for the proposed algorithm? Are there any requirements for the kernels?
2) For Algorithm 2, could the split proportion be different than half and half? The proportion would influence the results. Are there any experimental results to check the effect of the proportion?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the questions and comments.
The choice of the kernel function:
This simple nonparametric method performs rather robustly with respect to the choice of the kernel function. Essentially, all standard choices of kernel specify a localized weighting regime for error quantile estimation and enjoy similar or even identical theoretical guarantees. Numerically, we presented the naive kernel and the Gaussian (RBF) kernel in the paper, but we also tried the Laplace RBF kernel, the normalized triangle kernel, and higher-order kernels on different tasks, and they all gave similar performance. We did not stress this point in the submitted version, but we will include more information about this aspect in the next version of our paper.
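As a toy illustration of this robustness (not the paper's exact experiments), one can compare the naive (box), Gaussian, and Laplace kernels on the same synthetic heteroscedastic residuals; all of them localize the weighting around the query point and produce similar quantile estimates:

```python
import numpy as np

kernels = {
    "naive": lambda u: (np.abs(u) <= 1).astype(float),  # box kernel
    "gaussian": lambda u: np.exp(-u ** 2 / 2),
    "laplace": lambda u: np.exp(-np.abs(u)),
}

def kernel_quantile(x, r, x0, tau, h, K):
    # Kernel-weighted empirical tau-quantile of residuals r near x0.
    w = K((x - x0) / h)
    order = np.argsort(r)
    cum_w = np.cumsum(w[order]) / w.sum()
    return r[order][np.searchsorted(cum_w, tau)]

rng = np.random.default_rng(3)
n = 5000
x = rng.uniform(0, 1, size=n)
r = rng.normal(size=n) * (0.5 + x)   # residual spread grows with x

# All three localized kernels target the same 0.9-quantile near x0 = 0.5
# (the true value is about 1.28 in this setup).
estimates = {name: kernel_quantile(x, r, 0.5, 0.9, 0.1, K)
             for name, K in kernels.items()}
```

The estimates differ only by sampling noise, reflecting that the kernel choice matters far less than the bandwidth in this localized scheme.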
Data splitting scheme:
We remark that the split between the mean estimation data and the error quantile estimation data is quite flexible, and it does not have to be 50-50. We are sorry for giving the impression of a half-and-half split. The split ratio in fact trades off between learning a good mean prediction model and achieving a decent estimation of the quantiles. In the numerical experiments, we perform a grid search over ratios from 2:8 to 8:2, and the best proportion is picked based on performance on the validation set. For some datasets, the best proportion happens to be half-and-half, but this is not always the case. Interestingly, many works on split conformal prediction do recommend such a 50-50 ratio without providing much theoretical justification. We will update our paper on this point and look forward to seeing more future investigations of this aspect.
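The grid search over split ratios can be sketched as follows. This is a simplified toy version under stated assumptions: a least-squares line stands in for the actual mean predictor, and a single global residual quantile stands in for Algorithm 1; the validation pinball loss selects the ratio.

```python
import numpy as np

rng = np.random.default_rng(4)

def pinball_loss(y, q, tau):
    d = y - q
    return np.mean(np.maximum(tau * d, (tau - 1) * d))

n, tau = 2000, 0.9
x = rng.uniform(-1, 1, size=n)
y = 2 * x + rng.normal(size=n) * (0.5 + np.abs(x))
x_val = rng.uniform(-1, 1, size=500)
y_val = 2 * x_val + rng.normal(size=500) * (0.5 + np.abs(x_val))

def two_step_score(ratio):
    m = int(ratio * n)
    # Step 1: fit the mean model on the first split (least squares here).
    beta = np.polyfit(x[:m], y[:m], 1)
    # Step 2: estimate the residual tau-quantile on the second split
    # (a single global quantile, to keep the sketch short).
    q_hat = np.quantile(y[m:] - np.polyval(beta, x[m:]), tau)
    # Score: validation pinball loss of the resulting quantile prediction.
    return pinball_loss(y_val, np.polyval(beta, x_val) + q_hat, tau)

ratios = [0.2, 0.35, 0.5, 0.65, 0.8]
best_ratio = min(ratios, key=two_step_score)
```

The winning ratio depends on the data: more mean-estimation data helps when the mean is hard to fit, while more calibration data helps when the residual quantile is hard to estimate.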
We hope our response addresses the raised questions. If there are any follow-up questions/concerns, we will get back to you timely in the following discussion week. | Summary: This paper proposes a new method for quantile regression and for calibrating prediction intervals in regression. The paper first proposes a simple quantile regression method and shows that this method estimates the true quantile curve at the minimax-optimal rate. For calibrating prediction intervals, the basic idea is to 1) decompose the conditional distribution of the response variable into a conditional mean + noise term, 2) estimate the conditional mean, and 3) apply the above quantile regression method to the distribution of residuals (i.e., respose minus estimated conditional mean). The approach is agnostic to the method used to fit the conditional mean itself, and the experiments demonstrate that the proposed method (as well as a supplementary method that also applies dimension reduction) performs well using feed-forward and recurrent neural nets, as well as random forests.
Strengths: The paper is quite easy to follow. The proposed method seems both simple and effective, and makes very weak assumptions. Figure 1 makes the advantage of individual calibration very clear, and the theoretical guarantees are hence both useful and impressive in light of existing results on the impossibility of individual calibration. The experiments are also fairly thorough and well-presented.
Weaknesses: 1) Table 1: The "Ours Best?" column seems misleading, because it is taking the best of two different methods (NRC and NRC-DR). In several cases, only one of the proposed methods performs best, while the other performs worse than competitors. However, in practice, one must typically pick one method before knowing which of the two will perform better. So, this column is not really informative of how the proposed methods would perform in practice. I suggest removing this column; perhaps a vertical rule could be added instead to distinguish the current paper's methods from previous methods.
2) The introduction is quite long, and it is not clear to me whether all of this information should be presented so early in the paper.
For example, the discussion on Page 2 about "individual calibration" is unclear since "individual calibration" isn't defined until Page 3. Much of the content could also be refactored into a specific "Related Work" section.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1) One of the paper's key ideas is to decompose the prediction interval problem into a mean regression problem and a noise quantile prediction problem. Are there any cases where this decomposition would fail, or would be expected to perform worse than a method that learns the regression quantiles directly?
2) Relatedly, how does the performance of the mean estimator affect the performance of the prediction interval (e.g., if the mean estimator under- or overfits)?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper is fairly clear about its limitations, such as the gap between the theoretical performance (which suffers from the curse of dimensionality) and the strong empirical results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback and the raised questions. We believe clarifying these questions improves the positioning of our work.
Decomposition/two-step procedure:
As noted by the reviewer, our paper adopts a two-step approach that first predicts the mean and then calibrates the quantile of the noise. Empirically, we do find that the two-step procedure is better than the one-step procedure that directly predicts the quantile of $Y|X$ (see the attachment for the new experiment). Theoretically, this has a nice justification too. Specifically, from the minimax lower bound in Theorem 2, the problem’s hardness depends on the Lipschitz constant $L$, the sample size $n$, and the feature dimension $d$. The two-step procedure subtracts an initial regression model $\hat{f}(X)$ from the label $Y$; in the following, we argue why such subtraction can effectively reduce the Lipschitz constant.
Generally speaking, the conditional expectation function $E[Y|X]$ is highly related to, and very likely to wax and wane together with, the quantile function $Q_{\tau}[Y|X]$. In this light, subtracting the conditional expectation function $E[Y|X]$ can very likely smooth out the quantile function $Q_{\tau}[Y|X]$, resulting in a smaller Lipschitz constant. Metaphorically, this is quite like the method of control variates in Monte Carlo simulation, which introduces a control variable correlated with the target random variable so as to reduce the variance of the estimator. Here, the conditional expectation function $E[Y|X]$ works as the “control variate” for the original “target variate” $Q_{\tau}[Y|X]$ (the Lipschitz constant $L$ here corresponds to the variance to be reduced in Monte Carlo simulation). Of course, there might exist data distributions where $E[Y|X]$ and $Q_{\tau}[Y|X]$ are completely unrelated, and in that case, the two-step procedure does not guarantee an improvement over the one-step procedure. Also, in reality, we only use the fitted conditional expectation $\hat{E}[Y|X]$ (from the mean prediction model). Regardless of overfitting or underfitting, as long as the function $\hat{E}[Y|X]$ behaves in a manner related to the quantile function $Q_{\tau}[Y|X]$, it can in principle effectively reduce the Lipschitz constant. In the extreme case, a Lipschitz constant of $L=0$ for the error quantile function $Q_{\tau}[Y-E[Y|X]|X]$ corresponds to homoscedasticity, which is still reasonable for some data distributions; meanwhile, we cannot really expect the original $Q_{\tau}[Y|X]$ to have $L=0$, i.e., to be a constant function.
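The control-variate intuition can be checked numerically in a toy homoscedastic case: subtracting the (here, known) conditional mean leaves a flat residual quantile curve, so the Lipschitz constant the rate depends on drops from the slope of the mean function to zero. The specific mean function and noise level below are purely illustrative.

```python
import numpy as np

x = np.linspace(0, 1, 200)
mean = np.sin(4 * x)        # E[Y|X=x]; this particular shape is illustrative
sigma = 0.3                 # homoscedastic noise level
z90 = 1.2816                # standard normal 0.9-quantile

q_y = mean + z90 * sigma                  # 0.9-quantile of Y | X = x
q_resid = np.full_like(x, z90 * sigma)    # 0.9-quantile of (Y - E[Y|X]) | X = x

def lipschitz(f):
    # Finite-difference estimate of the Lipschitz constant on the grid.
    return np.max(np.abs(np.diff(f)) / np.diff(x))

# lipschitz(q_y) inherits the slope of the mean function,
# while lipschitz(q_resid) is exactly 0 after the subtraction.
```

In the heteroscedastic case the residual quantile curve is no longer flat, but the same comparison shows the reduction whenever the mean and quantile curves co-move.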
Paper writing and introduction:
We thank the reviewer for the kind advice. We agree with the comments on the “Ours Best” column and will adjust it accordingly in the next version of our paper. Also, for the introduction, we apologize for using concepts before their formal definitions and will revise accordingly.
In terms of the length of our introduction, it is partly due to the nature of the studied problem, which lies at the intersection of calibration, quantile regression, and conformal prediction, spreading over several fields of machine learning and statistics. The ML literature has a related but different flavor from the statistics literature. The calibration literature, which originates from the machine learning community, is often based on modern machine learning models such as deep neural networks; it usually proposes models at the price of sacrificing some theoretical rigor. Concurrently, the statistics literature develops concepts that always have theoretical guarantees. But sometimes it is not so straightforward: some recent advances in conformal prediction claim excellence in dealing with heteroscedasticity while still lacking a finite-sample theoretical guarantee of reaching such individual coverage. For more discussion on this, we refer to our response to Reviewer bxwF. We skipped some less relevant quantile regression and conformal prediction literature, but this still results in a long introduction. Moreover, papers from different communities usually do not cite each other much, though they are working on a similar or even the same problem. In this light, we hope our work can make at least some contribution to improving awareness and synchronizing the language/understanding among researchers from different communities working on this same problem.
We hope our response addresses the raised questions. If there are any follow-up questions/concerns, we will get back to you timely in the following discussion week.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their response.
Regarding the decomposition: I understand why this approach reduces the Lipschitz constant of the estimand and thereby reduces the difficulty of estimation. My question was whether there are any counterexamples where this would be expected to perform poorly (or less well than a one-step procedure). If so, the paper would be made stronger and clearer by discussing such a counterexample.
Regarding the writing of the introduction, I understand the reason for the length (and I appreciated what felt to me like a thorough literature review). My suggestion was that it could be better organized for the reader (e.g., by using some more subsection/paragraph headers, or by moving some of the content that isn't needed to understand the proposed method into a "Related Work" section). Although it's important to point out gaps in the existing literature (e.g., in a "Related Work" section), I feel that the basic motivation for proposing an approach shouldn't depend so heavily on the existing literature (which is always changing).
I don't necessarily expect the authors to reply to the above points in the discussion period -- just some things to think about as they continue to revise the paper. Overall, I still feel this paper gives a simple, effective, and novel approach to an important problem, backed by solid experiments and theory, and so I intend to keep my score of 8.
That said, I am not up-to-date on the conformal inference literature (the main reason for my low confidence of 2), and I defer to Reviewer bxwF on whether critical papers are missing in this regard (beyond what the authors could easily add to the camera-ready version).
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for raising the points; we will continue thinking about them and include more content in these aspects in the future version of our paper. | Rebuttal 1:
Rebuttal: We thank the reviewers for taking the time to read our paper and for all the helpful feedback. We look forward to more follow-up discussions in the coming week.
We provide an individual response to each reviewer, and we'd like to use the extra space below to discuss the assumptions of our paper, in response to the questions raised by Reviewer bxwF on assumptions and comparisons against the conformal prediction literature. We apologize for any inconvenience and confusion caused.
The only assumption made in our paper is Assumption 1. Parts (a) and (b) are standard and critical in achieving individual calibration guarantees. Part (c) is used to obtain the finite-sample convergence rate. Essentially, a by-product of our analysis is a concentration argument for quantile estimators. Part (c) is indeed milder than the assumptions in the existing literature on quantile bandits and nonparametric density estimation that also derives such quantile concentration results. We believe this part of the assumption, compared to the literature, is of independent interest and a contribution in itself. The key is that we adopt a new analysis approach that combines ideas from both parametric and nonparametric analysis. We refer to the discussion in Lines 210-219 of our paper.
Now we proceed with a detailed discussion of the 5 papers mentioned by the reviewer. After careful examination, we don’t believe any of these works (or other works mentioned in our paper’s reference list and our responses to the reviewers) provides a result comparable to ours under a comparable or weaker assumption.
All 5 listed papers are developed in the area of conformal prediction, where the goal is to give a coverage set that contains the true label with a certain probability. Thus the goal of conformal prediction is no harder than our quantile prediction task, since correct quantiles induce the desired coverage sets. Among these 5 papers, Paper 3 and Paper 5 address classification problems, and the remaining 3 papers address regression problems, as does ours.
First, we rephrase the “impossible triangle” summarized in Paper 4: (i) conditional/individual coverage (as in our Definition 3); (ii) no assumptions on the underlying distribution; (iii) a finite-sample guarantee and asymptotic consistency. This triangle is shown by Vovk (2012) and Barber et al. (2021) to be impossible to achieve simultaneously. The listed works, as well as ours, follow different routes to reconcile the impossibility:
Paper 1 and Paper 5 focus on marginal coverage (as we define in our Definition 1), which greatly relaxes (i). Paper 3 and Paper 4 consider milder violations of (i), but in different ways. Paper 3 considers a “clustered conditional coverage”, which parallels the group calibration defined in our Definition 2. Paper 4 considers another middle ground between marginal coverage and conditional coverage. Yet both papers remain far from conditional coverage (i).
In short, all 4 of these papers relax requirement (i). But what is the theoretical cost of reaching such individual coverage, and what is the price of keeping requirement (i)? These questions are not answered by these papers. Our paper answers them with minimal relaxations of the other two requirements: we only relax the “no distribution assumption” requirement (ii) with our Assumption 1. In comparison, Paper 2 also does not relax requirement (i) and is the work most related to ours. However, it relaxes both (ii) and (iii), and it violates (ii) to a greater extent than ours. To see this, note that the algorithm in Paper 2 is based on a regularized pinball loss / interval score loss, where the regularizer forces the covering interval length to be independent of the coverage indicator. It achieves a theoretical guarantee that the true conditional quantile is the minimizer of the regularized population loss under (1) a realizability assumption (that is, the true conditional quantiles lie in the function class considered) and (2) an assumption that the conditional distribution of $Y$ given $X=x$ is continuous with respect to $x$.
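For reference, the (unregularized) pinball loss mentioned above is the standard quantile loss, whose population minimizer is the true conditional quantile. A minimal sketch, omitting Paper 2's regularizer; the function name is illustrative:

```python
def pinball_loss(y, q_hat, alpha):
    """Pinball (quantile) loss at level alpha: its expectation over Y
    is minimized when q_hat equals the true alpha-quantile of Y."""
    diff = y - q_hat
    return alpha * diff if diff >= 0 else (alpha - 1) * diff

# At alpha = 0.9, under-prediction (y above q_hat) costs 0.9 per unit,
# while over-prediction costs only 0.1, pushing estimates toward the upper tail.
assert pinball_loss(1.0, 0.0, 0.9) > pinball_loss(0.0, 1.0, 0.9)
```

The asymmetry of the two branches is what makes the minimizer the alpha-quantile rather than the mean.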
Apart from the stronger assumptions, Paper 2's theoretical results have several limitations. First, it achieves only asymptotic consistency, compared to our finite-sample guarantee in Theorem 1, which means that Paper 2 violates requirement (iii). Moreover, it only establishes the necessary condition that the true conditional quantile is a minimizer of the population loss, not the converse. Second, Paper 2 violates (ii) to a much deeper extent than ours. Note that the key assumptions made in Paper 2 are the zero-approximation-error assumption and the continuity assumption. The former is not a weak assumption in statistical learning theory, as it requires that there be no model misspecification at all, which keeps the guarantee from being truly “distribution-free”. In contrast, our algorithm attains a finite-sample guarantee under only three very general assumptions. Even for the continuity part, with which the reviewer is unsatisfied, our assumption can be relaxed to one weaker than Paper 2's while reaching the same asymptotic result: our result only requires the conditional quantile to be continuous, while Paper 2 requires continuity of the whole conditional distribution. Third, the regularization term in Paper 2 involves a zero-one indicator, which is discontinuous and thus unsuitable for gradient descent. The authors modify their algorithm by replacing it with a smooth approximation to overcome the issue, without any formal guarantee for the modified version. In contrast, our theory covers the exact algorithm that we propose.
In short, many existing works on calibration completely ignore works on conformal prediction. We didn’t do this in our paper; in contrast, we’d like to call for more mutual awareness between these communities. Also, while we position our work along with the calibration literature, our theoretical analysis complements the existing literature on conformal prediction.
Pdf: /pdf/2193c567a975933486a8fadb064b49bcda25dc2c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Maximum State Entropy Exploration using Predecessor and Successor Representations | Accept (poster) | Summary: This paper addresses the problem of maximum state entropy exploration in environments without rewards. Specifically, it proposes a method to learn a history-based policy maximizing the entropy over the states sampled in a single trajectory. The method, called $\eta\psi$-Learning, combines predecessor representations, to keep memory of the previously visited states, and successor representations, to predict the states that will be visited in expectation under the current policy. The introduced algorithm is tested against MaxEnt in a set of toy domains and some continuous control tasks; it is also briefly tested against VariBAD in a gridworld domain.
Strengths: - The paper proposes a simple yet effective method to learn non-Markovian policies that maximize the state entropy induced by a single trajectory;
- The introduced method remarkably improves the performance of MaxEnt, here taken as a representative algorithm for maximum state entropy exploration with Markovian policies;
- The ideas are presented with clarity and with some nice visualizations (e.g., Fig. 1 and 3) to support the intuition.
Weaknesses: - It is unclear how the procedure can be scaled up to more challenging domains, e.g., image-based inputs;
- The method is only tested against MaxEnt, which is not the state-of-the-art method for state entropy maximization, especially in continuous control tasks.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: This is an overall interesting paper, tackling a relevant problem raised by (Mutti et al., 2022a) on how to learn non-Markovian policies for maximum state entropy exploration efficiently. Especially, the authors propose to circumvent the inherent computational intractability of the problem with a well-motivated function approximation approach, in which the current decision is conditioned on a compact representation of the history and a forecast of the future state visitations under the policy.
Although the paper does not introduce particularly novel or surprising ideas, it presents the method with clarity and showcases a brilliant performance for $\eta\psi$-Learning, at least in comparison with MaxEnt. While those might be sufficient reasons for acceptance, I believe that including a discussion on how this procedure can be scaled up to more challenging domains, such as image-based tasks, would make for a significant leap in the value of the paper.
*(Major. How to scale up the approach?)*
The experiments show a remarkable improvement over MaxEnt, but previous works (e.g., Mutti et al., 2021) have demonstrated how Markovian policies maximizing the state entropy can be learned in challenging domains. Instead, it is unclear how the predecessor and successor representations of $\eta\psi$-Learning can be scaled to truly high-dimensional settings. While the method may be trivially adapted to continuous settings via discretization, it is unclear whether it can match the performance of previous works based on $k$-NN entropy estimators. Can the authors discuss avenues to scale their approach to high-dimensional domains? Would estimating $\psi_\theta$ in practice be significantly easier than state density estimation?
*(Estimating the state visitations instead of the entropy)*
In $\eta\psi$-Learning, the quality of an action is evaluated in terms of the future visitations it will induce under the policy, from which the entropy of the trajectory can then be computed. However, the task of estimating the future state visitations seems strictly harder than what is really needed to make an optimal decision, i.e., an estimate of the entropy that future visitations will induce. I am wondering whether there is a way to directly target entropy estimation instead of future state visitations in $\eta\psi$-Learning.
*(Experiments. MaxEnt performance)*
The learning curve of MaxEnt is quite underwhelming: While the inferior performance of Markovian policies against $\eta\psi$-Learning is reasonable, I would have expected to see some learning progress for MaxEnt as well, especially in domains in which the random policy is far from the optimal strategy, such as in ChainMDP and RiverSwim. Can the authors clarify what is happening with the learning curve of MaxEnt?
*(How the approach relates to Forward-Backward representations?)*
A previous paper (Touati and Ollivier, Learning one representation to optimize all rewards, 2021) also addressed unsupervised reinforcement learning through representations of the past and future visitations, which they call forward-backward representations, bearing some similarity with the predecessor and successor representations of $\eta\psi$-Learning. Differently from this paper, they make learning these representations the actual unsupervised objective, and they advocate for directly employing forward-backward representations for zero-shot adaptation to the supervised task, instead of running state entropy maximizing policies first. Can the author compare $\eta\psi$-Learning with this previous approach, and explain whether the successor representation might be directly exploited in a supervised adaptation phase?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The paper does not include an explicit assessment of the limitations of the presented approach, which would add significant value to the paper in my opinion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and valuable feedback. We aim to address the concerns in the following:
> MEPOL as baseline
We thank the reviewer for pointing this out; we have added MEPOL [1] as a baseline (Figure 1 of the rebuttal PDF) and observe that $\eta\psi$-Learning outperforms MEPOL on the continuous control tasks of Reacher and Pusher when compared over a single trajectory of finite length.
> Major. How to scale up the approach?
Learning to explore in more complex tasks with high-dimensional input spaces would require a better representation learning method and a mechanism to estimate successor and predecessor representations. For representation learning, existing methods that use auxiliary losses, inverse/forward dynamics, or random network-based features can be used. The more challenging task is learning the SR, and future work can explore using Successor Measures [2] and Proto-Value Networks [3].
> Estimating the state visitations instead of the entropy
Thank you for raising this point. While predicting future state visitations may seem harder than needed for entropy prediction, it is important to note that the optimal decision depends on which states are visited multiple time steps into the future. It is possible that more efficient algorithms exist for predicting this entropy directly, but to our knowledge such algorithms do not appear in the published literature, and $\eta\psi$-Learning is the first of its kind. Another challenge with estimating entropy directly is that the estimator needs to adapt to the changing policy during training. In the proposed algorithm, this is mitigated because both the entropy and the policy directly depend on the estimated SR vector, requiring no additional updates to estimate the entropy given the policy. In the revised version, we have highlighted this aspect and state that future work will involve discovering more efficient methods for estimating the entropy induced by future state visitations.
> Experiments. MaxEnt performance
In the MaxEnt paper, the evaluation was done by computing the state visitation distribution across multiple trajectories. However, in this work, we focus on learning policies that can optimize the entropy and attain optimal coverage within a single trajectory. In Appendix H.2, we evaluate MaxEnt across multiple trajectories and show that its performance improves with the number of trajectories. On Reacher, which is a harder task to solve because the state and action spaces are complex compared to the grid-based environments, we do see the performance of MaxEnt improve during training (Figure 4(a) in the main paper). Furthermore, a random policy was found to have small coverage of the state space even across multiple trajectories, as the agent starts at one of the corners of the state space and needs to be efficient to cover it.
> How the approach relates to Forward-Backward representations?
Thank you for pointing out this connection. Intuitively, the Forward-Backward representations capture similar state visitation statistics as the predecessor and successor representations used by $\eta\psi$-Learning. Mathematically, the forward-backward (FB) representation factorizes the Q-function in an RL setting in a very different way than the $\eta\psi$-Learning algorithm. The FB method also focuses on a reward-maximization setting instead of exploration, and its representations are only conditioned on states, whereas the $\eta\psi$-Learning algorithm conditions its representations on trajectories. During training, the focus of FB learning is on representation learning using an offline dataset, where it is assumed that the agent does not have to explore the environment. The learned FB representations are then used to solve multiple different tasks with the same representations, assuming that the reward parameterization is known. In contrast, the $\eta\psi$-Learning algorithm learns exploratory policies that could be used to initially explore an unknown task to determine this reward parameterization. As the comment suggests, the $\eta\psi$-Learning algorithm complements the FB method. We believe that integrating the two into a cohesive RL agent is an interesting avenue for future research and have added this discussion to the paper.
> The paper does not include an explicit assessment of the limitations
We have added a section titled limitations in the paper. We mention the points below in brief and have expanded on them in the paper.
1. *Scaling to high-dimensional inputs:* We used the same points from the discussion above on scaling the method.
2. *Environments with changing dynamics:* Currently, the method is limited to exploring within the same environment, where the transition dynamics do not change. Extending it to procedural environments is hard, as it requires SR vectors that can adapt to changes in the environment; we leave this for future research.
3. *Architectural priors for estimating SR:* The successor representations use a GRU to encode prior states; future research can explore architectures with better long-term memory, such as Transformers or S4.
4. *Estimating predecessor representation:* Future research can also explore learning predecessor representation vectors as done in [4] to improve sample efficiency.
In addition to the limitations, we have added the points on state visitation instead of entropy and connection with FB representations in the discussions. We hope we addressed the concerns and would be happy to take more questions. We hope the reviewer will consider increasing the score.
#### References
[1] Mutti et al., "Task-agnostic exploration via policy gradient of a non-parametric state entropy estimate." AAAI’21.\
[2] Touati et al., "Learning one representation to optimize all rewards." NeurIPS’21.\
[3] Farebrother et al., "Proto-value networks: Scaling representation learning with auxiliary tasks." ICLR’23.\
[4] van Hasselt et al., "Expected eligibility traces." AAAI’21.
---
Rebuttal Comment 1.1:
Comment: We thank the reviewer again for their review. We have added ways to scale the proposed method and added more baselines. We also discussed the comparison with FB representations and the estimation of state entropy instead of state visitations, and updated the paper with these points.
As the end of the discussion period is approaching, we would like to kindly ask you to review the changes and assess whether the concerns are addressed. If so, we hope that you would be willing to increase your score.
Thank you for your time,\
The Authors
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer RqRN,
We hope that you've had a chance to read our rebuttal. As the end of the discussion period is approaching, we would greatly appreciate a reply as to whether our response and clarifications have addressed the issues raised in your review.
Thank you for your time,\
The Authors
---
Rebuttal Comment 1.2:
Title: After response
Comment: I am very sorry for my late reply.
I want to thank the authors for their detailed comments on the points I raised. I am happy to keep my original positive evaluation, and I will recommend accepting this paper in the private discussion. | Summary: This paper shows that a combination of "successor" and "predecessor" representations can be used to develop an efficient maximum entropy exploration policy.
Strengths: - Overall, the paper is clearly written and makes a useful contribution to the exploration literature.
- As far as I know, the ideas are novel.
- The authors make a strong empirical case for their algorithm.
- I'm a bit unsure about the significance of the algorithm, beyond the empirical results shown in the paper. I'm not sure whether the paper is sufficiently ground-breaking to have a significant impact on the broader reinforcement learning literature.
Weaknesses: - I think the authors are a bit loose with their arguments about human cognition. It is hotly debated to what extent cognition depends on language in a strong way. It is also important to note that one can endorse a "language of thought" hypothesis about high-level cognition without endorsing the hypothesis that this corresponds to natural language. In any case, I appreciate that these points have little bearing on the substance of this paper.
- Eq. 1 could benefit from more explanation.
- Please state what error bars show in figures.
UPDATE: the authors have addressed my comments.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: The authors should be able to easily address my relatively minor comments in the "weaknesses" section.
More broadly, I think the general usefulness of this approach will depend on how it can be applied beyond the maximum entropy exploration setting.
UPDATE: the authors have addressed my comments.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: In the Discussion, the authors discuss several ways to improve and extend their algorithm.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and valuable feedback. We aim to address the concerns in the following:
> I think the authors are a bit loose with their arguments about human cognition.
We thank the reviewer for bringing this up and agree that it is not clear how cognition depends on language. As the reviewer notes, we raise these points only as conceptual motivation and focus on the problem of designing novel methods for efficient exploration.
> Eq. 1 could benefit from more explanation.
We thank the reviewer for pointing this out. To clarify this, we have adjusted the writing between lines 83-89 to:\
For a trajectory $\tau=(s_1,a_1,...,a_{h-1},s_h)$ of length $h$, we want to compute the state visitation distribution to estimate the entropy. Each state within the trajectory can be formally expressed as a vector by first encoding the state $s_t$ as a one-hot bit vector $e_{s_t}$. The h-step state visitation distribution for $\tau_h$ can be computed by marginalizing across the time steps:
$$\xi_{\gamma,\tau} = \sum_{t=1}^h \gamma(t) e_{s_t},(Eq. 1)$$
where $\gamma: \mathbb{N} \to [0, 1]$ is the *discount function* (we denote the set of positive integers by $\mathbb{N}$), such that $\sum_{t=1}^h \gamma(t)=1$. The normalization of the discount function is necessary, as it ensures that $\xi_{\gamma,\tau}$ is a probability vector. We note that this use of a discount function is distinct from the discount factor in common RL algorithms such as Q-learning, but it is necessary here, as we will elaborate in the following section. The expected state visitation distribution for a policy $\pi$ can be obtained by generating multiple trajectories using $\pi$ and averaging across them, yielding $E_{\tau} [\xi_{\gamma,\tau}]$.
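To make Eq. 1 concrete, here is a minimal sketch with a uniform discount function $\gamma(t) = 1/h$; the function names are illustrative, not from the paper:

```python
import math

def state_visitation_distribution(trajectory, num_states, gamma=None):
    """Discounted state-visitation distribution of Eq. 1.

    trajectory: sequence of integer state indices s_1..s_h.
    gamma: per-timestep weights summing to 1; defaults to uniform 1/h.
    """
    h = len(trajectory)
    if gamma is None:
        gamma = [1.0 / h] * h  # uniform discount function, sums to 1
    xi = [0.0] * num_states
    for t, s in enumerate(trajectory):
        xi[s] += gamma[t]  # adds gamma(t) * e_{s_t} for the one-hot vector e
    return xi

def visitation_entropy(xi):
    """Shannon entropy of the visitation distribution (higher = better coverage)."""
    return -sum(p * math.log(p) for p in xi if p > 0)

# A trajectory that revisits state 0 has lower entropy than one covering 4 states.
xi_repeat = state_visitation_distribution([0, 0, 0, 1], num_states=4)
xi_cover = state_visitation_distribution([0, 1, 2, 3], num_states=4)
assert visitation_entropy(xi_cover) > visitation_entropy(xi_repeat)
```

With the uniform choice of $\gamma$, a trajectory covering more distinct states yields a higher-entropy visitation distribution, which is exactly what the objective rewards.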
> Please state what error bars show in the figures.
We mentioned in line 237 in the paper that the error bars denote the 95% confidence interval. However, we agree with the reviewer and have added this detail in the caption of the figures for more clarity.
> I think the general usefulness of this approach will depend on how it can be applied beyond the maximum entropy exploration setting.
We agree with the reviewer that the proposed method will have more impact when applied to broader scenarios. Prior methods for maximum state entropy exploration were optimized to attain optimal entropy across multiple trajectories of long length. However, extending such methods to general tasks requires more efficient policies that can explore the state space with minimal interactions. We believe $\eta\psi$-Learning, by learning to explore efficiently within a single episode of limited length, is a step towards using such policies for standard RL tasks. In the paper, we compare $\eta\psi$-Learning with Meta-RL methods, where exploration is required at the beginning to find rewarding states efficiently during evaluation (Figure 4(c)).
We also added an additional experiment on a sparse MountainCar environment, where the agent is rewarded only after reaching the goal state. Such tasks demand exploration at the start to gather rewarding transitions and improve sample efficiency. We compare with the following baselines: a random agent, TD3 [1], TD3 combined with a count-based bonus as an intrinsic reward (TD3-Count), and TD3 combined with a first-occupancy bonus [2] as an intrinsic reward (TD3-First). We also propose a variant of our method combined with TD3, called TD3-$\eta\psi$-Learning, which learns two critics: one to estimate the SR as described in Algorithm 2 ($Q_{expl}$), and the other to estimate the Q-function conditioned on the current state, as done in TD3 ($Q_{ext}$). Analogous to TD3, the latter critic is learned using extrinsic rewards. Lastly, to update the actor, the agent optimizes the overall Q-function, defined as the linear combination of the extrinsic Q-value and the entropy-based Q-value: $Q_{total} = Q_{ext} + \beta Q_{expl}$.
Figure 4 in the rebuttal PDF presents the results on the sparse MountainCar environment, where the proposed method outperforms the baselines. We have added this experiment, along with pseudo-code for TD3-$\eta\psi$-Learning, and leave leveraging this approach for harder exploration tasks to future work.
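A minimal sketch of the combined critic $Q_{total} = Q_{ext} + \beta Q_{expl}$ described above; the action values and the choice of $\beta$ are illustrative, not taken from the paper:

```python
def total_q(q_ext, q_expl, beta=0.1):
    """Combined critic value for the hypothetical TD3 variant: extrinsic
    Q-value plus a beta-weighted entropy-based exploration Q-value."""
    return q_ext + beta * q_expl

# The actor prefers the action whose combined value is highest.
candidates = {"a1": (1.0, 0.2), "a2": (0.8, 3.0)}  # (q_ext, q_expl) per action
best = max(candidates, key=lambda a: total_q(*candidates[a]))
assert best == "a2"  # the exploration bonus tips the choice at beta=0.1
```

The scalar $\beta$ trades off reward maximization against single-episode coverage; with $\beta = 0$ the variant reduces to plain TD3.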
We hope we addressed most of the concerns and hope you would be willing to increase your score.
#### References
[1] Fujimoto et al., "Addressing function approximation error in actor-critic methods." ICML’18.\
[2] Moskovitz et al., "A First-Occupancy Representation for Reinforcement Learning." ICLR’22.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I thank the authors for comprehensively responding to my comments. I feel that the paper should be accepted, and I am raising my score to 8. | Summary: This paper proposes a new exploration method under maximum entropy RL settings. At each time step, the agent selects the action that maximises the expected entropy of the finite-length trajectory. The trajectory entropy is decomposed into two terms, based on variants of the predecessor representation and successor representation, respectively. The authors proposed separate training frameworks under discrete and continuous action spaces. The resulting agent is evaluated in grid worlds with different configurations.
Strengths: - The proposed decomposition of trajectory entropy objective is novel;
- The authors provide comprehensive description of Q-learning and policy gradient training under discrete and continuous action spaces, respectively.
- Empirical evaluations are comprehensive and coheres with the arguments in the paper;
Weaknesses: - The predecessor representation part (for the computation of the entropy of past trajectory) seems unnecessary and does not contribute to action selection, could the authors elaborate on this point?
- The evaluations on continuous control tasks are still within the grid world environment; could the authors evaluate the proposed agent on standard exploration-demanding continuous control tasks?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See questions in Weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and valuable feedback. We aim to address the concerns in the following:
> The predecessor representation part (for the computation of the entropy of past trajectory) seems unnecessary and does not contribute to action selection, could the authors elaborate on this point?
The entropy depends on the state visitation distribution, which is computed across the entire trajectory of the episode. At any time T, the agent has access to the history of states and needs to take an action that maximizes the entropy of the state visitation distribution. Since the states visited up to time T are known to the agent, they are used to compute the visitation distribution for the first T time steps, yielding the predecessor representation vector (Equation 5). To estimate the distribution of the future states, the agent predicts the expected visitation distribution of future states, i.e., the successor representation vector defined in Equation 6. In short, the predecessor representation encodes the visitation distribution of past states and the successor representation predicts the visitation distribution of future states (see our example in Figure 1). In Equation 7, we propose to aggregate them, which gives the expected state visitation distribution for the whole trajectory. The Q-function is defined as the entropy of this estimated state visitation distribution. Although the successor representation is conditioned on the previously visited states, the predecessor representation is needed to calculate the expected state visitation distribution, and thus the state visitation entropy of a single trajectory, as stated in Equation 7. Since the agent selects an action based on the Q-function, the action selection is conditioned on the state visitation distribution, which in turn depends on the predecessor representation. We have expanded on this in the revised version of the paper to explain the importance of the predecessor representation in action selection.
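The aggregation described above (following Equations 5-7) can be sketched as follows; the SR predictions here are hand-set hypothetical vectors standing in for learned estimates, and the function names are ours:

```python
import math

def predecessor_vector(history, num_states, h):
    """Visitation distribution of the states observed so far (Eq. 5-style),
    weighted by a uniform discount 1/h over the full horizon h."""
    eta = [0.0] * num_states
    for s in history:
        eta[s] += 1.0 / h
    return eta

def q_entropy(eta, psi):
    """Q-value as the entropy of the estimated full-trajectory visitation
    distribution xi = eta + psi (Eq. 7-style aggregation)."""
    xi = [a + b for a, b in zip(eta, psi)]
    return -sum(p * math.log(p) for p in xi if p > 0)

# Toy example: horizon h=4, two steps already taken (states 0 and 1).
h, num_states = 4, 4
eta = predecessor_vector([0, 1], num_states, h)
# Hypothetical SR predictions for two candidate actions (each sums to 2/h):
psi_revisit = [0.25, 0.25, 0.0, 0.0]   # predicted to revisit states 0 and 1
psi_explore = [0.0, 0.0, 0.25, 0.25]   # predicted to visit new states 2 and 3
assert q_entropy(eta, psi_explore) > q_entropy(eta, psi_revisit)
```

The comparison shows why the predecessor vector matters for action selection: only by adding it to the SR prediction can the agent tell that revisiting states 0 and 1 lowers the whole-trajectory entropy.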
> The evaluations in continuous control tasks is still within the grid world environment, could the authors evaluate the proposed agent in standard exploration-demanding continuous control tasks;
The experiments on continuous control tasks were conducted on the state space provided by the environment. The input to the recurrent architecture (Figure 5 in the Appendix) is the continuous 8-dimensional or 27-dimensional vector obtained from the Reacher and Pusher environments, respectively. The predecessor and successor representations are computed over the discretized vector obtained using the (x, y) coordinates of the fingertip. To further show that the method can scale to harder continuous control tasks, where we would want to compute the entropy over a larger state space, we conducted an experiment on the HalfCheetah environment. Taking inspiration from Proto-Value Networks [1], to avoid the curse of dimensionality during discretization we discretize each dimension of the state space separately and compute the overall entropy by averaging the entropy across all dimensions. The successor and predecessor representations are computed on the 17-dimensional state space, where each dimension is discretized into 10 bins for our experiments. The state coverage in the evaluation metrics is likewise computed as the average coverage across the discretized dimensions. We report the results in Figure 2 of the rebuttal PDF, obtained across 5 seeds. We compare with MEPOL [2], which uses a Markovian policy and a kNN entropy estimate. $\eta\psi$-Learning was found to have better coverage and entropy over the discretized state space, demonstrating that the proposed discretization can be leveraged to scale the method to large state spaces. We leave extending the method to explore POMDPs with more complex observations, including images, to future research, and have added these discussions to the revised version of the paper.
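The per-dimension discretization strategy described above can be sketched as follows; the bounds, bin count, and trajectories are illustrative, not the paper's actual HalfCheetah setup:

```python
import math

def per_dim_entropy(states, low, high, bins=10):
    """Average entropy over independently discretized state dimensions.

    states: list of d-dimensional tuples; low/high: per-dimension bounds.
    Avoids the exponential blow-up of a joint d-dimensional grid.
    """
    d = len(low)
    entropies = []
    for i in range(d):
        counts = [0] * bins
        for s in states:
            frac = (s[i] - low[i]) / (high[i] - low[i])
            counts[min(bins - 1, max(0, int(frac * bins)))] += 1  # clamp to valid bin
        n = sum(counts)
        probs = [c / n for c in counts if c > 0]
        entropies.append(-sum(p * math.log(p) for p in probs))
    return sum(entropies) / d

# Two 2-D trajectories over [0,1]^2: one clustered, one spread out.
clustered = [(0.1, 0.1)] * 8
spread = [(x / 8, x / 8) for x in range(8)]
low, high = (0.0, 0.0), (1.0, 1.0)
assert per_dim_entropy(spread, low, high) > per_dim_entropy(clustered, low, high)
```

Averaging per-dimension entropies keeps the representation size linear in the number of dimensions (17 dims x 10 bins = 170 entries) instead of exponential (10^17 joint cells).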
We thank the reviewer for the feedback. We hope we addressed most of your questions, and hope you consider updating your score.
#### References
[1] Farebrother et al., "Proto-value networks: Scaling representation learning with auxiliary tasks." ICLR’23\
[2] Mutti et al., "Task-agnostic exploration via policy gradient of a non-parametric state entropy estimate." AAAI’21.
---
Rebuttal Comment 1.1:
Comment: We thank the reviewer again for their review. We have elaborated on the importance of predecessor representation and present how the proposed method can be scaled to tasks with more complex state spaces. We would be happy to answer any further questions.
As the end of the discussion period is approaching, we would like to kindly ask you to review the changes and assess whether the concerns are addressed. If so, we hope that you would be willing to increase your score.
Thank you for your time,\
The Authors
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer L5xE,
We hope that you've had a chance to read our rebuttal. As the end of the discussion period is approaching, we would greatly appreciate a reply as to whether our response and clarifications have addressed the issues raised in your review.
Thank you for your time,\
The Authors | Summary: This paper proposes a novel exploration algorithm in RL by combining the successor representation with the predecessor representation and maximising episode-level entropy of state visitation. The proposed approach demonstrates improvement over the MaxEnt baseline on simple Gridworld and continuous control environments.
Strengths: - This paper presents a nice way to bridge the successor and predecessor representation to maximise entropy in the visitation of states in an episode.
- The paper is clearly written and well motivated.
Weaknesses: - In the experiments, the proposed approach is only compared with the MaxEnt baseline. While this is a good baseline to compare with since the objective is similar, the standard exploration baselines are missing: 1) epsilon-greedy exploration, 2) count-based intrinsic motivation approaches like UCB, 3) random action baseline, and optionally 4) auxiliary objectives for exploration, like curiosity-based learning. While I agree with the related work section that these works learn Markovian policies while the proposed work learns a policy conditioned on the full history, a couple of them should still have been included, to place this algorithm in the overall landscape of exploration algorithms.
- Since the successor representation and the predecessor representation in this case both depend on the full history, it is not possible to disentangle the impact they have on exploration and the evaluation metrics as there is no baseline included in which the successor representation depends only on the current state.
- A relevant paper that is missing from the related work section, and also from the baselines, is "A First-Occupancy Representation for Reinforcement Learning" by Moskovitz et al., which is another state representation that indicates the time of first access of a state by the agent and has been shown to be conducive to exploration.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please refer to the comments listed in the Weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have discussed general high-level ethical concerns with this line of work, but the limitations of the proposed approach have not been discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and valuable feedback. We aim to address the concerns in the following:
> In the experiments, the proposed approach is only compared with the MaxEnt baseline.
We thank the reviewer for pointing this out. We ran an experiment on the sparse MountainCar environment (Figure 4 in the rebuttal PDF) and showed that the proposed method promotes exploration in sparse-reward environments. We describe this experiment in more detail in the General Response section. Furthermore, we did not compare against intrinsic-curiosity and count-based methods on reward-free tasks because such methods do not learn exploratory policies at convergence: the intrinsic-curiosity or count-based bonus becomes zero or uniform at convergence. This was also discussed in Section 2 of SMM [1], which elaborates on the difference between such methods and the current line of work. We agree with the reviewer that an exploration algorithm based on maximum state entropy should be compared with prior works on exploration, and we hope the MountainCar experiment presents the benefit of the proposed method. Lastly, we have also added MEPOL [2], which learns a Markovian policy and uses a non-parametric entropy estimator to learn optimal policies for continuous control tasks, and a random policy as additional baselines (Figure 3 in the rebuttal PDF).
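For intuition on why count-based bonuses do not yield exploratory policies at convergence, note that a typical bonus of the form $1/\sqrt{N(s)}$ (a common choice, assumed here purely for illustration) decays toward zero as states are revisited:

```python
from collections import Counter

visits = Counter()

def count_bonus(state):
    """Count-based intrinsic bonus; decays toward 0 as a state is revisited."""
    visits[state] += 1
    return 1.0 / visits[state] ** 0.5

first = count_bonus("s0")                             # 1.0 on the first visit
last = [count_bonus("s0") for _ in range(9999)][-1]   # 0.01 after 10,000 visits
```

Once the bonus is near zero (or uniform), the induced policy no longer has any incentive to keep exploring, unlike a policy trained to maximize state-visitation entropy directly.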
> There is no baseline included in which the successor representation depends only on the current state.
As suggested by the reviewer, we conducted experiments on the Reacher and Pusher environments where the successor representation is conditioned on the current state only. However, the predecessor representation is still used to compute the state-visitation distribution, which is necessary for the objective function. In Figure 3 of the rebuttal PDF, we can observe that this modification does not perform well on either task, and that learning the SR conditioned on the history of visited states is necessary for learning optimal behaviors.
> A relevant paper that is missing from the related work section, and also from the baselines, is "A First-Occupancy Representation for Reinforcement Learning"
We thank the reviewer for pointing this out. We were unaware of this work and have added this paper to the related works section. Furthermore, in the experiment on sparse MountainCar, we also used the first-occupancy-based bonus as an intrinsic reward in each episode for comparison, and the proposed method was found to perform better.
> The authors have discussed general high-level ethical concerns with this line of work, but the limitations of the proposed approach have not been discussed.
In this work, we focus on developing a method that can be optimized with maximum state entropy objective and learn to explore within a single episode of finite length. As with any new approach, there are certain limitations:
1. *Scaling to high-dimensional inputs:* Learning to explore more complex tasks with high-dimensional input spaces would require a better representation learning method and a mechanism to estimate successor and predecessor representations. For representation learning, existing methods that use auxiliary losses, inverse/forward dynamics, or random-network-based features can be used. The more challenging part is learning the SR, and future work can explore leveraging methods like Successor Measures [3] or ProtoValueNetworks [4].
2. *Environments with changing dynamics:* The learned SR depends on the environment dynamics and the policy, and in this work we learn the SR for a fixed environment. However, many real-world tasks require exploration in an environment with changing dynamics (e.g., procedural environments [5]). A potential direction is learning universal successor representation approximators [6], where the successor representations are conditioned on a context that defines the environment; we leave this for future research.
3. *Architectural priors for estimating SRs:* The successor representations use an RNN, which is known to suffer from the vanishing gradient problem. Many real-world tasks require agents to retain information over multiple timesteps. Future research can explore better architectural priors, like Transformers or S4, that have better memory and are known to work well on complex tasks.
4. *Estimating the predecessor representation:* In this work, we computed the predecessor representation as the summation of the prior state representations. However, recent methods like Expected Eligibility Traces [7] show better sample efficiency, and we leave leveraging such methods for future research.
Lastly, we have added these limitations to a separate section in the paper. We hope we addressed most of your questions, and hope you consider updating your score.
#### References
[1] Lee, et al., "Efficient exploration via state marginal matching." arXiv preprint arXiv:1906.05274 (2019).\
[2] Mutti et al., "Task-agnostic exploration via policy gradient of a non-parametric state entropy estimate." AAAI’21.\
[3] Touati et al., "Learning one representation to optimize all rewards." NeurIPS’21.\
[4] Farebrother et al., "Proto-value networks: Scaling representation learning with auxiliary tasks." ICLR’23.\
[5] Zha et al., "Rank the episodes: A simple approach for exploration in procedurally-generated environments." ICLR’23.\
[6] Borsa et al. "Universal successor features approximators." arXiv preprint arXiv:1812.07626 (2018).\
[7] van Hasselt et al., "Expected eligibility traces." AAAI’21.
---
Rebuttal Comment 1.1:
Comment: We thank the reviewer again for their review. We have added more baselines, an experiment where the SR is only conditioned on the current state, and also experimented in a sparse reward environment to compare with other exploration-based methods.
As the end of the discussion period is approaching, we kindly ask the reviewer to engage in further discussion and we hope the reviewer could adjust their score accordingly if all raised concerns are addressed.
Thank you for your time,\
The Authors | Rebuttal 1:
Rebuttal: Firstly, we thank the reviewers for their time and constructive feedback. We hope to address the concerns during the rebuttal and would be happy to answer more questions.
In this work, we developed an algorithm that learns exploratory policies at convergence, which can explore the state space efficiently within a finite-length trajectory. Such policies can benefit generalization in different applications like meta-RL and episodic exploration. Maximum state entropy exploration is a potential direction for learning such policies. However, prior works are not very efficient, as they either learn a Markovian policy, optimize state coverage over multiple long trajectories, or learn a mixture of stochastic policies. Due to these shortcomings, they are not widely used for solving tasks in RL. To address these concerns, we introduce $\eta\psi$-Learning and demonstrate that the proposed algorithm can learn to efficiently explore the state space within a finite-length trajectory. $\eta\psi$-Learning achieves this by combining predecessor and successor representations to estimate the state-visitation distribution, and by utilizing this estimate to optimize the entropy-based objective. Mutti et al. ’22 [1] showed theoretically that learning such policies that achieve zero regret is NP-hard, and we develop a practical algorithm to solve such tasks. We hope that the proposed method bridges the gap toward leveraging policies learned via maximum state entropy exploration for more complex tasks in RL.
In Figure 4(c), we show that when compared with VariBAD, $\eta\psi$-Learning is more efficient at finding the reward function. We are adding a few more experiments to demonstrate the broader applicability of $\eta\psi$-Learning (discussed below):
1. To compare with other exploration algorithms on standard RL tasks, we performed additional experiments on the sparse MountainCar environment. The agent receives a positive reward upon reaching the goal position, after which the episode terminates, and no reward in other states. We chose this environment because the reward is sparse, and it is hard for the agent to discover the reward function, as the agent needs to plan to explore the top of the hill. For the baselines, we compare with a random agent, TD3 [2], TD3 combined with a count-based bonus as an intrinsic reward (TD3-Count), and TD3 combined with a first-occupancy bonus [5] as an intrinsic reward (TD3-First).\
We also propose a variant of our method combined with TD3 and call it TD3-$\eta\psi$-Learning. For this, we propose to learn two critics: one to estimate the SR as described in Algorithm 2 ($Q_{expl}$), and the other to estimate the sum of extrinsic rewards conditioned on the current state, similar to TD3 ($Q_{ext}$). Lastly, to update the actor, the gradients are obtained using the overall Q-function, defined as the linear combination of the two Q-values based on the extrinsic rewards and the entropy-based term: $Q_{total} = Q_{ext} + \beta Q_{expl}$. We have added pseudo-code for this algorithm in the Appendix of the paper.\
Figure 4 in the rebuttal PDF presents the results on the sparse MountainCar environment. We compare two metrics across 5 seeds: Return, and the Average Steps taken to reach the goal state. The Average Steps metric highlights whether the agent learns to solve the task with minimal interactions. We plot the mean and shade the 95% confidence interval. Through our experiment on sparse MountainCar, we demonstrate that the proposed method can improve efficiency on standard RL tasks, especially in sparse-reward environments. Future work can leverage the proposed extension in the POMDP setting with high-dimensional inputs like images.
2. In this experiment, we wanted to show that $\eta\psi$-Learning can be scaled to high-dimensional spaces, and we conduct an experiment on the HalfCheetah environment. We propose to learn successor representations for larger state spaces using ideas similar to ProtoValueNetworks [3]. To learn the SR and predecessor representation, we discretize each dimension of the state space into K bins. Thus, a continuous state can be converted into |S| one-hot vectors, where each vector is K-dimensional. The overall entropy and coverage are calculated by averaging the entropy and coverage over each dimension of the state space. We used MEPOL [4] as the baseline, with the authors’ implementation, for our experiments. We observed that $\eta\psi$-Learning outperforms MEPOL on the entropy and coverage metrics when evaluated over a single trajectory of 1000 steps (Figure 2 of the rebuttal PDF). Through this experiment, we wanted to show that the proposed method can be scaled to larger state spaces, and we believe future work can explore scaling it to more complex environments.
In the above experiments, we show that the method can be applied to tasks with rewards and can be scaled to environments with high-dimensional state spaces; we leave scaling to more complex environments for future work.
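As a toy illustration of the actor objective in the TD3-$\eta\psi$-Learning variant described in point 1 above, the two critic values are combined linearly before taking the gradient with respect to the action (hypothetical linear critics stand in for the learned networks; all names and values below are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear critics Q(s, a) = w . [s, a]; in the actual method these
# would be the learned networks Q_expl (SR/entropy critic) and Q_ext (TD3 critic).
w_ext = rng.normal(size=3)
w_expl = rng.normal(size=3)

def q_ext(s, a):
    return w_ext @ np.append(s, a)

def q_expl(s, a):
    return w_expl @ np.append(s, a)

def q_total(s, a, beta=0.5):
    # Linear combination driving the actor update: Q_total = Q_ext + beta * Q_expl
    return q_ext(s, a) + beta * q_expl(s, a)

def actor_action_grad(s, a, beta=0.5, eps=1e-5):
    """Finite-difference gradient of Q_total w.r.t. a scalar action."""
    return (q_total(s, a + eps, beta) - q_total(s, a - eps, beta)) / (2 * eps)

s, a = np.array([0.1, -0.2]), 0.3
g = actor_action_grad(s, a)  # ascend this direction to improve the action
```

The coefficient $\beta$ trades off exploitation of the extrinsic reward against the entropy-driven exploration term.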
We hope we addressed most of the concerns during the rebuttal and would be happy to answer further questions.
#### References
[1] Mutti et al., "The importance of non-markovianity in maximum state entropy exploration." ICML’22.\
[2] Fujimoto et al., "Addressing function approximation error in actor-critic methods." ICML’18.\
[3] Farebrother et al., "Proto-value networks: Scaling representation learning with auxiliary tasks." ICLR’23.\
[4] Mutti et al., "Task-agnostic exploration via policy gradient of a non-parametric state entropy estimate." AAAI’21.\
[5] Moskovitz et al., "A First-Occupancy representation for reinforcement learning." ICLR’22.
Pdf: /pdf/0082d22897796dcb85bc12a63c5c17665d8fd992.pdf | NeurIPS_2023_submissions_huggingface | 2023 | null | null | null | null | null | null | null | null |
Single-Call Stochastic Extragradient Methods for Structured Non-monotone Variational Inequalities: Improved Analysis under Weaker Conditions | Accept (poster) | Summary: This paper proposes new convergence results for single-call stochastic extragradient methods under weaker conditions. More specifically, the authors consider quasi-strongly monotone and weak Minty VI problems, both under an unconstrained finite-sum (or arbitrary sampling) setting. The authors propose the expected residual (ER) condition, which extends similar conditions used in stochastic optimization (minimization) to the VI setting. ER is more general than the boundedness assumption on operator noise, and is sufficient for establishing their convergence results. The authors also give sufficient conditions for ER and explain its connections to other widely used technical conditions/assumptions. Using ER, the authors establish convergence results for single-call stochastic extragradient methods for quasi-strongly monotone and weak Minty VI problems. For quasi-strongly monotone problems, two results are given for constant and decreasing step size rules. The authors then give expressions for the ER parameters (which factor into convergence rates) under non-uniform sampling. Numerical experiments complement the theoretical findings.
Strengths: - Clear presentation of results.
- Sufficient discussion of the background and context of the new ER condition.
- First convergence guarantee for SOG under arbitrary sampling.
- First convergence guarantee for SPEG without any bounded variance assumption.
Weaknesses: See **Questions**.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: My main question is around the practicality of the proposed step size rules. In previous work, most (static, non-adaptive) step sizes used in convergence theorems tend to be highly conservative, and more aggressive or adaptive step size rules (which may weaken/break convergence guarantees) need to be used in order to make the algorithms converge in experiments.
In this work, to achieve desirable convergence rates, the step sizes (constant or decreasing) depend on $\delta$, $\mu$, and $L$. My concern is that (i) these constants may be hard to find, or (ii) only conservative, safe estimates can be given. Therefore, the step sizes given by the theorems may be too small. It seems that in all experiments, theoretically safe constants are used. Have the authors considered using more aggressive step sizes which do not follow the theorems entirely (but partially, such as the $1/k^2$ decreasing trend in (11) but with different constants)? If so, it would be helpful to point these out. If the theorems' suggested step size rules are already "near-optimal" in experiments (meaning that further increasing them would lead to non-convergence), it would also be worth pointing out this favorable observation. In fact, if true, this would be such a rarity based on my experience and other work, and should be worth highlighting.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 2 fair
Limitations: This is a theoretical and methodological work. The limitations are mainly technical: many practical problems mentioned as a motivation of the work do not (or cannot be easily shown to) satisfy the exact technical assumptions. However, this is typical and not a concern.
I do not see potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for a detailed review and positive evaluation. Below, we address questions and concerns raised by the reviewer.
**\[My main question is around the practicality of the proposed step size rules...\]** We appreciate the reviewer's concern regarding adaptive stepsizes. However, this should not be noted as a weakness of our work. In our work, we focus on the situation when the problem parameters are known, since it is important to understand this case first before moving to adaptive stepsizes. Not all papers on optimization should be about adaptive stepsizes, and it is beyond the scope of our work. There are various practical examples where the constants $L, \mu, \delta$ can be computed. One such example is the Robust Least Squares problem; see the Robust Least Squares subsection under Numerical Experiments of the paper [Global Convergence and Variance Reduction for a Class of Nonconvex-Nonconcave Minimax Problems](https://proceedings.neurips.cc/paper/2020/file/0cc6928e741d75e7a92396317522069e-Paper.pdf). The objective function in Equation 8 of that paper is a quadratic game, and the values of the constants $L, \mu$, and $\delta$ can be easily computed.
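To make the point about computable constants concrete, for a regularized quadratic game the operator is linear, so $L$ and $\mu$ can be read off its Jacobian (the matrices below form our own toy example, not the one from the cited paper):

```python
import numpy as np

# Operator of the game min_x max_y (1/2)||x||^2 + x^T B y - (1/2)||y||^2:
# F(x, y) = (x + B y, y - B^T x), i.e., F(z) = M z with M below.
B = np.array([[2.0, 0.5], [0.5, 1.0]])
I = np.eye(2)
M = np.block([[I, B], [-B.T, I]])

L = np.linalg.norm(M, 2)                       # Lipschitz constant of F
mu = np.linalg.eigvalsh((M + M.T) / 2).min()   # strong monotonicity modulus

# For this structure, L = sqrt(1 + sigma_max(B)^2) and mu = 1,
# since the bilinear part of M is skew-symmetric.
```

With such closed-form constants, the theorem-prescribed stepsizes can be instantiated exactly rather than estimated conservatively.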
**\[...Therefore, the step sizes given by the theorems may be too small...\]** The reviewer writes that the step sizes given by the theorems may be too small. However, we want to note that our analysis of SPEG recovers the best known rate of convergence in the deterministic setting for both quasi-strongly monotone and weak Minty variational inequality problems, which highlights the tightness of our stepsize choices.
**\[... Have the authors considered using more aggressive step sizes which do not follow the theorems entirely (but partially, such as the $\frac{1}{k^2}$ decreasing trend in (11) but with different constants)?...\]** We did not try the stepsize choice of $\frac{1}{k^2}$ with different constants; in the paper, we run experiments only to validate our theory. We appreciate this feedback and will add more details on this in the updated version of our work.
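For reference, the single-call (past) extragradient update under discussion makes one new operator evaluation per iteration; below is a minimal sketch on a toy strongly monotone linear operator (the operator, stepsize, and iteration count are our illustrative assumptions, not the paper's experimental setup):

```python
import numpy as np

# Toy strongly monotone operator F(z) = A z with unique solution z* = 0;
# the skew part of A mimics the rotational component of a min-max game.
A = np.array([[1.0, 2.0], [-2.0, 1.0]])
F = lambda z: A @ z

def speg(z0, gamma=0.1, iters=500):
    """Single-call (past) extragradient: reuse the last F evaluation to extrapolate."""
    z = z0.copy()
    F_prev = F(z)                    # stored evaluation from the "past" point
    for _ in range(iters):
        z_bar = z - gamma * F_prev   # extrapolation step (no new oracle call)
        F_prev = F(z_bar)            # the single new operator call this iteration
        z = z - gamma * F_prev       # update step
    return z

z = speg(np.array([5.0, -3.0]))
print(np.linalg.norm(z))  # shrinks toward 0, the solution
```

Compared with the classical extragradient method, only `F(z_bar)` is evaluated each iteration; the extrapolation reuses the stored value from the previous step.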
**If you agree that we managed to address all issues, please consider raising your score. If you believe this is not the case, please let us know so that we have a chance to respond.**
---
Rebuttal Comment 1.1:
Comment: > Not all papers on optimization should be about adaptive stepsizes, and it is beyond the scope of our work.
- I agree that the limited scope of the paper makes sense, and it is not a fair ask to expand it during a short review window. I think "limitation" rather than "weakness" is a better description.
> There are various practical examples where the constants can be computed.
- I partially agree, to a smaller extent, given the RLS example the authors shared and some other highly stylized examples I thought about. Still, I don't think this is a good example of what's used "in practice" (we do mean different things here), and I believe that more aggressive step sizes will still help a lot for the RLS numerically, based on similar problem/experiment settings I tried. I don't know a single example of mature code (optimization solvers, well-maintained open-source code, proprietary code used in the industry, in various application domains such as model training, forecasting, revenue optimization) where the step size follows, even remotely, the proposals in a principled methodological paper with convergence results. In short, almost every theory-grounded step size rule is too conservative, and one should use much larger step sizes to boost numerical convergence - this is true in interior-point method solvers, SOTA extensive-form game equilibrium computation code, sparse SVM software, and almost all "real" optimization code I looked at. In these settings, theory-grounded step sizes can indeed be computed - but given they're usually way too conservative and merely computing them takes nontrivial time, it's rarely done in these solvers/code. (DL training is a different topic and I don't think we want to go there...) Of course, this is a highly subjective opinion of my own based on my own experiences, and therefore it is not fair either to base my rating heavily on this. And it is also beyond the scope of this paper, as the authors argued. | Summary: This paper studies single-call stochastic extragradient methods for solving two classes of structured variational inequality (VI) problems, i.e., (i) quasi-strongly monotone problems and (ii) weak Minty variational problems. These two classes generalize the assumptions of strong monotonicity and comonotonicity, respectively, to hold only with respect to the solution point.
The authors consider the stochastic reformulation of VIs, which allows for mini-batching with arbitrary sampling. The convergence results are built upon the expected residual condition, which is implied by component Lipschitzness and which in turn implies bounded variance. Convergence is analyzed in both settings and with different stepsize strategies. The authors also provide numerical experiments to support their theoretical discussion.
Strengths: 1. The authors generalize the idea from stochastic minimization and propose the expected residual condition, which is implied by Lipschitzness of the component operators and can be used to provide a variance bound. Thus, this paper requires neither a bounded variance assumption nor growth conditions.
2. The authors provide a thorough discussion for the convergence results and detailed comparison with prior works, such as Lines 183--200.
Weaknesses: 1. Although the authors state the stochastic problem in term of the **finite-sum** structure (see Eq. 1 and 5), the convergence analysis is more likely for **infinite-sum** problems, especially when the authors also consider mini-batch in this paper.
2. The expected residual condition proposed in the paper looks restrictive, because the authors make this assumption directly on the stochastic estimator $g(x)$ instead of on the stochastic oracle queries. It seems non-trivial to assume it for stochastic estimators beyond the mini-batching ones, such as SVRG or SARAH.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Although weak Minty variational problems look more general than the comonotonicity assumption, I am not sure whether this generality is more of a theoretical artifact. Could the authors provide some practical examples which satisfy Def 1.2 but are not comonotone?
2. Could the authors clarify the stochastic settings this paper focused on, finite-sum or infinite-sum? I feel confused because $n$ never appears in the convergence results, and the choice of the batch size $\tau$ in Theorem 4.5 (Eq. 15) can essentially be larger than $n$.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Please see the weaknesses and questions above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for a detailed review and positive evaluation. Below, we address questions and concerns raised by the reviewer.
**\[...the convergence analysis is more likely for infinite-sum problems, especially when the authors also consider mini-batch in this paper.\]** We consider only the finite-sum structure of the operator in our work. Our rigorous proofs hold as long as the (ER) condition holds; indeed, (ER) can be satisfied beyond finite-sum problems, but it also covers the finite-sum case, since we show that (ER) holds for finite-sum problems. Could the reviewer point us to the steps where the analysis does not work for finite-sum cases?
**\[...non-trivial to assume it for the stochastic estimators beyond the mini-batching ones, such as SVRG or SARAH.\]** We make Assumption 3.1 directly about $g(x) = F_v(x)$. One can think of $g(x)$ as an oracle call or some other estimator constructed via some procedure (e.g., batching): our analysis holds whenever (ER) is satisfied, and we provide multiple examples when (ER) holds. This assumption is no more restrictive than the standard bounded variance assumption used in prior works on SEG.
Indeed, our work does not capture variance-reduced algorithms. However, this is not a weakness of our work: we provide an analysis without the bounded variance assumption, and our analysis captures several sampling strategies, including minibatching and importance sampling. We leave a unified study that captures variance-reduced algorithms as a direction for future work.
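To illustrate how an estimator $g(x)$ under arbitrary sampling remains unbiased for a finite-sum operator, a reweighted single-sample estimator might look as follows (a hypothetical sketch with random linear components; the matrices and sampling weights are our assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite-sum operator F(x) = (1/n) * sum_i F_i(x) with linear components F_i(x) = A_i x
n = 4
As = [rng.normal(size=(2, 2)) for _ in range(n)]
A_mean = sum(As) / n

def g(x, p):
    """Single-sample estimator: draw i ~ p, reweight so that E[g(x)] = F(x)."""
    i = rng.choice(n, p=p)
    return (As[i] @ x) / (n * p[i])

x = np.array([1.0, -2.0])
p = np.array([0.1, 0.2, 0.3, 0.4])          # any distribution with p_i > 0 works
est = np.mean([g(x, p) for _ in range(50000)], axis=0)
# est is close to the true F(x) = A_mean @ x, regardless of the sampling weights
```

The sampling distribution only changes the constants in a condition like (ER), not the unbiasedness of the estimator, which is what allows non-uniform (importance) sampling in the analysis.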
**\[...Could the authors provide some practical examples which satisfy Def 1.2 but are not comonotone?\]** We don't have such a practical example in mind. However, we want to highlight that any comonotone problem is a special case of our Def 1.2. Therefore, our convergence guarantee holds for comonotone problems as well.
**\[Could the authors clarify the stochastic settings this paper focused on, finite-sum or infinite-sum?...\]** In our work, we focus on finite-sum problems. However, if the assumptions (in particular, Assumption 3.1) are satisfied, our analysis works for other types of problems as well, e.g., when $F(x) = E_{\xi \sim \mathcal{D}}[F_{\xi}(x)]$, where $\mathcal{D}$ can be a continuous distribution.
**If you agree that we managed to address all issues, please consider raising your score. If you believe this is not the case, please let us know so that we have a chance to respond.**
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their responses, which partially address my concerns. My score stands.
However, the authors did not directly answer my Question 2, where I have doubts about the $O(K)$ batch sizes in Eq. (15), and the authors do not state the stochastic oracle complexity in the main paper. For the complexity, the authors implicitly assume that $n = \Omega(\frac{1}{\epsilon^2})$ to obtain an $\epsilon$-norm solution, which essentially reduces to the infinite-sum setting. For the other regime, $n = O(\frac{1}{\epsilon^2})$, the result in this paper is loose once the batch size in Eq. (15) is larger than $n$. For the paper [Böhm 2022] the authors compared to in Line 233, they assume the infinite-sum setting and provide stochastic oracle complexity results. I believe the authors should clarify these points in the revision.
---
Reply to Comment 1.1.1:
Title: Clarification on the oracle complexity
Comment: We thank the reviewer for the response.
In Theorem 4.5, we use with-replacement batching; therefore, technically, this result allows the case of $n < K$. As we mentioned in our response, the proofs also work for the expectation (infinite-sum) case.
To get the complexity we need to choose $C = \frac{48}{\omega\gamma(1-L(\gamma + 4\omega))}$, $K = O\left(\frac{C|| x_0 - x^{\ast} ||^2}{\epsilon}\right)$ and $\tau = \max\left\lbrace 1, \frac{32 \delta}{(1 - L \gamma) L^3 \omega}, \frac{48 C \omega \gamma \delta || x_0 - x^{\ast}||^2}{(1 - \gamma L)^2 \epsilon}, \frac{2 C \omega \gamma \sigma_\ast^2 }{(1 - L \gamma) \epsilon} \right\rbrace$ and the oracle complexity will be
$$K\tau = O\left(\max\left\lbrace \frac{C|| x_0 - x^{\ast} ||^2}{\epsilon}, \frac{32 C \delta || x_0 - x^{\ast} ||^2}{(1 - L \gamma) L^3 \omega \epsilon}, \frac{48 C^2 \omega \gamma \delta || x_0 - x^{\ast}||^4}{(1 - \gamma L)^2 \epsilon^2}, \frac{2 C^2 \omega \gamma \sigma_\ast^2 || x_0 - x^{\ast}||^2}{(1 - L \gamma) \epsilon^2} \right\rbrace\right).$$
In particular, one can choose $\gamma = \max\lbrace \rho, \frac{1}{4L} \rbrace + \frac{1}{2L}$ and $\omega = \frac{1}{2}\min\lbrace \gamma - 2\rho, \frac{1}{4L} - \frac{\gamma}{4} \rbrace$.
We will add the remark about the infinite-sum case and also the complexity bounds to the final version of our paper. | Summary: This work studies single-call stochastic extra-gradient method for quasi strongly monotone and weak Minty Variational Inequality (VI). They relax the commonly-used bounded noise variance assumption and used the expected residual condition.
Strengths: The paper is well-written and the problem is relevant.
Weaknesses: In short, I think the technical novelty in this paper is minimal. I substantiate my claim below.
1. It introduces the expected residual condition, which has been studied before in the SGD literature, as the authors correctly state; so the condition itself is not a contribution. **As the authors agree (lines 173-175), the main difference between this work and [29] is that (8) is an assumption in [29], whereas Assumption 3.1 implies (8) in this work. But it is easy to see that the proof of Lemma 3.2 requires just a single use of Young's inequality (and indeed, that's how it's done in the Appendix).**
2. **Authors in [29] use another condition** $E|| g(x) - F(x) ||^2 \leq (a||x - x^*|| + b)^2$ **which is extremely similar to the ER condition. But in line 171, the authors claim that the constants $a$ and $b$ are not available in closed form. But this is a trivial result (Proposition 3.3). These calculations are routinely done in literature involving stochastic gradient as the authors themselves agree (line 153, [24-25]).**
3. **Use of ER condition relaxes the bounded condition on the noise variance but the additional technical innovation needed, compared to the bounded noise variance, is minimal.** Familiarity with the stochastic extra-gradient proof techniques right away reveals that the RHS of ER condition, $||x-x^*||_2^2$, is designed to be subsumed in other terms appearing in the proof which effectively leaves the proof almost same as the bounded variance case.
Specifically, **note that ER condition only introduces the term** $\omega^2\delta || \hat{x}_k - x^* ||_2^2$ **in the 5th line of the display appearing after line 664. But there is another term** $ -2 \omega \mu ||\hat{x}_k-x^*||_2^2$. **So the convergence can be guaranteed if** $\omega $ **is chosen small enough such that** $2\omega\mu ||\hat{x}_k-x^*||_2^2> \omega^2\delta ||\hat{x}_k-x^*||_2^2$. **This is almost trivial.**
In fact, one can easily think of allowing for bigger bounds for the noise variance keeping the proof unimpacted. For example, convergence can be easily shown if one allows for a noise variance bound like $\omega^{-1/2}\delta ||\hat{x}_k-x^*||_2^2$. **All the proofs depend on this simple and trivial extension.**
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please see the weakness section. If you could elaborate on the novelty that would be great.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for a detailed review. Below, we address questions and concerns raised by the reviewer.
**\[...the condition itself is not a contribution\]** Here, we want to highlight that the ER condition in minimization literature involves the functional value, i.e. the right-hand side is $f(x) - f^\ast$ (check Assumption 3.1 [SGD for Structured Nonconvex Functions: Learning Rates,
Minibatching and Interpolation](http://proceedings.mlr.press/v130/gower21a/gower21a.pdf)). However, there is no notion of function values in VIPs. Therefore, we have replaced the $f(x) - f^\ast$ with $||x - x^\ast||^2$, which completely changes the proof technique compared to the minimization setup.
**\[...the main difference between this work and [29] is that (8) is an Assumption in [29]...\]** Yes, authors of [29](https://arxiv.org/pdf/2003.10162.pdf) also use similar conditions as we have mentioned in our work. But the reviewer ignores the setup considered in their paper. [29](https://arxiv.org/pdf/2003.10162.pdf) considers the extragradient method with two oracle calls and solves for operators satisfying error-bound condition, whereas we consider the single-call method for solving quasi-strongly monotone and weak minty variational inequalities. Even though the conditions are similar, our work is completely different in other aspects.
**\[...These calculations are routinely done in literature involving stochastic gradient...\]** Yes, this kind of computation is not new; similar calculations exist in the literature, which we have also mentioned in our paper. However, we don't consider it a weakness of our work. We want to highlight that this was never done for the single-call extra gradient method for solving VIPs. As mentioned in the previous answer, our ER condition is a modification of the ER condition introduced in the minimization setup. Therefore, no work precisely computes the constants like $\delta$ of our ER. Moreover, we are the first to capture different sampling strategies (like minibatch, importance or any other single-element sampling) of the single-call method for solving VIPs.
**\[...ER condition only introduces the term $\omega^2 \delta ||\hat{x_k} - x^\ast||^2$ in the 5th line of the display appearing after line 664...\]**
- The reviewer rrNt argues that the ER condition only introduces the extra term $\omega^2 \delta ||\hat{x}_k - x^{\ast}||^2$, which can be handled easily. We politely disagree with the statement and ask the reviewer to check the 5th line after line 664 again. Reviewer rrNt completely ignores the term $\omega^2\delta||\hat{x}_{k-1} - x^\ast||^2$, which the ER condition also introduces. Moreover, this term cannot be canceled following the procedure mentioned by rrNt. Indeed, to handle such terms, we have introduced $||x_{k+1} - \hat{x}_k||$ in the Lyapunov function under consideration.
- The reviewer also overlooks the different step sizes proposed in our work. No switching step-size rules had been proposed for single-call methods, even under the bounded variance assumption, to obtain exact convergence.
- Moreover, reviewer rrNt also ignores our contributions for weak minty VIPs. We analyse stochastic single-call methods for $\rho < \frac{1}{2L}$, improving the previous restriction on $\rho$.
**We believe a score of 2 is too harsh for our work, and it definitely deserves better. If you agree that we addressed all issues, please consider raising your score. If you believe this is not the case, please let us know so that we have a chance to respond.**
---
Rebuttal Comment 1.1:
Title: Thanks for your answers.
Comment: Thank you for your detailed responses.
**[...the condition itself is not a contribution]** A similar assumption has been assumed in [29] as well. It is well known in saddle-point optimization literature (and VI literature) that function value-based assumptions or Lyapunov functions do not make sense and they have to be based on $||x_t-x^*||_2^2$. The required changes in the proof techniques are extremely well-studied (see A2.b in [29], A 1.2. in [3] for example).
**[...the main difference between this work and [29] is that (8) is an Assumption in [29]...]** [29]'s work is slightly different but my point is there is significant overlap that makes the novelty of this paper incremental.
**[...These calculations are routinely done in literature involving stochastic gradient...]** I agree that the computation of $\delta$ was not explicitly done in previous literature. One reason for that is that the computation of $\delta$ is trivial given the finite-sum setup and requires basic algebraic manipulations.
[...ER condition only introduces the term....in the 5th line of the display appearing after line 664...] These terms are easily controllable under ER condition. One main issue with Stochastic EG (SEG) is that with independent samples for the two steps of SEG, the algorithm will not converge even for monotone operators as proved in Proposition 1 of [29]. ER condition helps to alleviate this issue by providing a control over the variance proportional to the distance to the optimal point.
**[misleading claims]** It is claimed in line 183-184 that "To the best of our knowledge, the above theorem is the first result on the convergence of SPEG that does not rely on the bounded variance assumption." This is simply not true. [29] has the unbounded variance assumption as well.
I agree with the switching step-size and weak minty VIPs novelty.
Overall: I agree that the paper has some novelties, for which I am increasing my score by 1. But I still feel that the work (mainly the proof techniques required to establish the results) is too incremental to be considered for a venue like NeurIPS.
[29] https://arxiv.org/pdf/2003.10162.pdf
[3] https://arxiv.org/pdf/2111.08611.pdf
---
Reply to Comment 1.1.1:
Title: Response to rrNt
Comment: ### **Response to further comments:**
**\[...the condition itself is not a contribution\]** We are not sure which part reviewer rrNt is referring to in A 1.2 of [3] and A 2.b. of [29].
**\[...the main difference between this work and [29] is that (8) is an Assumption in [29]...\]** No, our work is not just slightly different from [29]. As mentioned in our earlier response, we solve a completely different class of problems (quasi-strongly monotone and weak minty) in the paper. Moreover, the algorithm analyzed in our paper uses single oracle calls in contrast to the two oracle calls used in [29]. Only the assumptions of stochastic estimators are closely related.
**\[...ER condition only introduces the term....in the 5th line of the display appearing after line 664…\]** As mentioned earlier, analysis of the single-call method was done in the literature only under the bounded variance assumption. Our previous response also pointed out the difficulties of analyzing the SPEG. The analysis is not trivial. Moreover, we would like to emphasize our contribution to solving weak minty problems for the reviewer once again. There does not exist any analysis of extragradient methods (both single and double oracle versions) without the bounded variance assumption for solving weak minty problems. This work is the first to provide a convergence guarantee of extragradient methods without the bounded variance assumption for solving weak minty VIPs. We request the reviewer to check the proof techniques of weak Minty VIPs. We hope rrNt will recognize the difficulties and increase our score.
**[misleading claims]** We disagree with the reviewer and stand by our claim. We ask the reviewer to check the algorithm in [29]. [29] analyses the stochastic extragradient method, which uses two oracle calls per iteration, while SPEG uses one oracle call per iteration. Therefore, as the paper claims, this is the first work that analyses the single-call extragradient method without bounded variance assumption, and our claim is **not misleading**.
------------------------------------------------------------------------------------------------------------
### **On novelty of our work:**
Reviewer rrNt mentions the following at the end of the review: “I still feel that the work (mainly the proof techniques required to establish the results) is too incremental to be considered for a venue like NeurIPS”.
We respectfully stand by our claim of novelty (please check our full response) and we politely disagree with the reasoning of the above statement. In our opinion, the fact that part of the proof techniques is an extension of previous works is **no reason for suggesting rejection of a paper**.
**With our work we answer several open questions in the performance of SPEG (see full response and also the last message to rev. Bwgk) for solving structured non-monotone VIPs.**
**We hope we have responded to the questions of reviewer rrNt appropriately. We still think a score of 3 is too low to recognize our contributions. We politely ask the reviewer rrNt to reevaluate our work.** | Summary: The paper explores single-call stochastic extragradient methods like stochastic past extragradient (SPEG) and stochastic optimistic gradient (SOG), which are increasingly popular and efficient algorithms for tackling large-scale min-max optimization and variational inequality problems (VIPs) commonly found in diverse machine learning tasks. Notwithstanding their growing popularity, the current convergence analyses of SPEG and SOG demand strong assumptions like bounded variance or growth conditions. Moreover, numerous key questions related to the convergence properties of these methods, such as mini-batching, effective step-size selection, and convergence guarantees under various sampling strategies, remain unaddressed.
This research endeavors to answer these questions, presenting convergence guarantees for two extensive classes of structured non-monotone VIPs: (i) quasi-strongly monotone problems, which extend strongly monotone problems, and (ii) weak Minty variational inequalities, which expand upon monotone and Minty VIPs. The paper introduces the expected residual condition and discusses its advantages, demonstrating how it enables a strictly weaker bound than those achieved by previously employed growth conditions, expected co-coercivity, or bounded variance assumptions. Lastly, the presented convergence analysis is applicable under the arbitrary sampling paradigm, encompassing special cases like importance sampling and various mini-batching strategies.
Strengths: The main contributions of the paper include the following:
- **Expected Residual:** The authors introduce the Expected Residual (ER) condition for stochastic variational inequality problems. The ER condition is used to derive an upper bound on $E\|g(x)\|^2$ (as detailed in Lemma 3.2), offering a strictly weaker alternative to the bounded variance assumption and previously used "growth conditions" for the analysis of stochastic algorithms. The paper shows that the ER condition holds for a large class of operators, specifically when the $F_i$ of the problem are Lipschitz continuous.
- **Novel Convergence Guarantees:** The paper presents novel convergence guarantees for Stochastic Past Extragradient (SPEG) without the need for a bounded variance assumption in the cases of quasi-strongly monotone and weak Minty Variational Inequalities (MVI). This is achieved through the use of the proposed ER condition. For the class of quasi-strongly monotone Variational Inequalities Problems (VIPs), the paper demonstrates a linear convergence rate to a neighborhood of $x^*$ when constant step-sizes are utilized. Furthermore, theoretically motivated step-size switching rules are provided that guarantee exact convergence of SPEG to $x^*$. In the weak MVI case, the convergence of SPEG is proved for $\rho < 1/2L$, improving existing restrictions on $\rho$. The authors compare their results with the existing literature in Table 1.
- **Arbitrary Sampling:** By reformulating the variational inequality problem stochastically, the authors explain how their convergence guarantees for SPEG hold under the arbitrary sampling paradigm. This approach allows them to cover a wide range of sampling strategies for SPEG not previously considered, including mini-batching, uniform sampling, and importance sampling. Therefore, their analysis of SPEG is unified for different sampling strategies. The authors also demonstrate the tightness of their analysis by showing that the best-known convergence guarantees of deterministic Past Extragradient (PEG) for strongly monotone and weak MVI can be obtained as special cases of their main theorems.
Weaknesses: - **Redundant Main Result:** The primary outcome of this research could essentially be seen as a single-call version of "Stochastic Extragradient: General Analysis and Improved Rates," but under simpler guarantees. This could potentially question the novelty of this research.
- **Uncertainty of Expected Residual (ER):** The Expected Residual (ER) is proposed as a new and beneficial condition, however, it is unclear whether this condition is indeed intuitive and interesting in practice. It would be beneficial if the authors could provide more justification or insights on the relevance of the ER condition in practical scenarios.
- **Importance of Single-Call Methods:** While the paper puts substantial focus on single-call methods, it is unclear whether these methods are significantly important in practical scenarios. Theoretical advantages are evident, but it would be constructive if the authors could provide more practical motivations or empirical examples where single-call methods bring notable improvements.
- **Comparison with Existing Methods:** If Stochastic Extragradient (SEG) methods already have satisfactory rates and guarantees, it is intuitive to assume that Stochastic Optimistic Gradient (SOG) methods would offer similar advantages. Therefore, a clear delineation between the contributions of the proposed method and existing ones would be beneficial for the reader.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Address the weaknesses' section
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for a detailed review and positive evaluation. Below, we address questions and concerns raised by the reviewer.
**\[... question the novelty of this research\]** We politely disagree with this remark. Our work on the single-call method uses a different proof technique compared to [Gorbunov et al. 2021](https://proceedings.mlr.press/v151/gorbunov22b/gorbunov22b.pdf) for proving the convergence. Analysing single-call methods involves dealing with the term $E||F_{v_k}(\hat{x}_k) - F_{v_{k-1}} (\hat{x}_{k-1})||^2$ (such terms do not appear in the analysis of the Extragradient method). Moreover, note that the Lyapunov function used to analyse our method is $E(||x_{k} - x^\ast ||^2 + ||x_{k} - \hat{x}_{k-1}||^2)$ in contrast to $E||x_{k} - x^\ast||^2$ for the Extragradient method. Furthermore, we provide convergence guarantees for a larger class of non-monotone problems, i.e. weak Minty variational inequalities, while Gorbunov et al. 2021 solve only quasi-strongly monotone problems. Therefore, our work is not just a simple extension of Gorbunov et al. 2021.
**\[...provide more justification or insights on the relevance of the ER condition in practical scenarios.\]** As discussed in the paper, the Expected Residual is a more relaxed condition than the bounded variance assumption (bounded variance is used to provide convergence guarantees for the stochastic single-call methods). Moreover, Expected Residual is not an assumption and holds for free whenever the operators $F_i$ are Lipschitz. However, there are several examples where bounded variance may not hold. We provide one such example here:<br>
<br>Consider the simple linear regression problem: $$\min_{x \in \mathbb{R}} f(x) := \frac{1}{2} (a_1x - b_1)^2 + \frac{1}{2} (a_2x - b_2)^2$$ where $x \in \mathbb{R}$ and $f: \mathbb{R} \to \mathbb{R}$. Here let us denote $f_1(x)=(a_1x - b_1)^2$ and $f_2(x) = (a_2x - b_2)^2$. Now consider the estimator $g(x)$ of $\nabla f(x)$ under uniform sampling, i.e. $g(x)$ takes the value $\nabla f_1(x)$ with probability $\frac{1}{2}$ and $\nabla f_2(x)$ with probability $\frac{1}{2}$. Then we have $$\begin{aligned}\mathbb{E} ||g(x) - \nabla f(x)||^2 &= \frac{1}{2} ||\nabla f_1 (x) - \nabla f(x)||^2 + \frac{1}{2} ||\nabla f_2(x) - \nabla f(x)||^2 \\ &= \frac{1}{2} \cdot \frac{1}{4}||\nabla f_1 (x) - \nabla f_2(x)||^2 + \frac{1}{2} \cdot \frac{1}{4}||\nabla f_2 (x) - \nabla f_1(x)||^2 \\ &= \frac{1}{4} ||\nabla f_1(x) - \nabla f_2(x)||^2 = \frac{1}{4} \left( 2(a_1x - b_1)a_1 - 2(a_2x - b_2)a_2 \right)^2 \\ &= \left( (a_1^2 - a_2^2) x - (a_1b_1 - a_2b_2) \right)^2.\end{aligned}$$ Thus, the expression $\mathbb{E} ||g(x) - \nabla f(x)||^2$ is a quadratic function of $x$. Hence, as $x \to \infty$, we have $\mathbb{E} ||g(x) - \nabla f(x) ||^2 \to \infty$ and the variance cannot be bounded by a constant. On the other hand, this function has a Lipschitz gradient, and from our results, ER and condition (9) hold for free. In addition, an analysis under Expected Residual allows us to capture the performance of stochastic methods for various sampling strategies used in practice. For example, practitioners frequently use minibatch or importance sampling to train machine learning models. Our analysis with Expected Residual allows us to explicitly derive the convergence rates under different sampling strategies. However, bounded variance does not capture the analysis under such sampling strategies.
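The unbounded-variance claim in this example is easy to verify numerically; the following sketch (with arbitrary illustrative coefficients of our own choosing) checks the closed-form expression above and shows the variance growing with $x$:

```python
# Numerical check of the linear-regression example above: under uniform
# sampling, the gradient-estimator variance grows quadratically in x,
# so it cannot be bounded by a constant. Coefficients are arbitrary.
a1, b1, a2, b2 = 2.0, 1.0, 0.5, -1.0
grad_f1 = lambda x: 2.0 * a1 * (a1 * x - b1)
grad_f2 = lambda x: 2.0 * a2 * (a2 * x - b2)
grad_f = lambda x: 0.5 * grad_f1(x) + 0.5 * grad_f2(x)

def variance(x):
    # E||g(x) - grad f(x)||^2 with g(x) = grad f_i(x), i uniform on {1, 2}
    return 0.5 * (grad_f1(x) - grad_f(x)) ** 2 + 0.5 * (grad_f2(x) - grad_f(x)) ** 2

closed_form = lambda x: ((a1**2 - a2**2) * x - (a1 * b1 - a2 * b2)) ** 2

for x in (0.0, 1.0, 10.0, 100.0):
    assert abs(variance(x) - closed_form(x)) < 1e-8  # matches the derivation
print(variance(100.0) > 1e4 * variance(1.0))  # True: variance is unbounded in x
```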
**\[...it is unclear whether these methods are significantly important in practical scenarios...\]** The single-call extragradient method requires one oracle call per iteration, in contrast to the two oracle calls of the extragradient method. In order to make a fair comparison of the two methods, we should count the number of oracle calls required to achieve a given accuracy. We compared the stochastic extragradient method of [Gorbunov et al. 2021](https://proceedings.mlr.press/v151/gorbunov22b/gorbunov22b.pdf) with the stochastic past extragradient method. The number of oracle calls required by the single-call method was less than that of the extragradient method. We have provided a plot (.pdf file comparing SPEG and SSEG) to compare the two methods in the General Response to the reviewers. In the attached figure, the stochastic past extragradient requires 2000 oracle calls, while the stochastic extragradient method requires more than 3000 oracle calls for convergence. It highlights the lower computational complexity of the single-call method and provides empirical evidence of notable improvements for these methods.
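For readers unfamiliar with the single-call structure, here is a hedged sketch of the deterministic past-extragradient (Popov) template underlying SPEG; the toy operator, step size, and iteration budget are our own illustrative choices, not taken from the paper:

```python
import numpy as np

# Minimal sketch of the past-extragradient (Popov) update. Each iteration
# reuses the previous extrapolation gradient, so only ONE new oracle call
# is made per iteration (the extragradient method needs two).
def past_extragradient(F, x0, gamma, iters):
    x, F_prev = x0.copy(), F(x0)        # one warm-up oracle call
    for _ in range(iters):
        x_hat = x - gamma * F_prev      # extrapolate with the *past* value
        F_prev = F(x_hat)               # the single oracle call this iteration
        x = x - gamma * F_prev          # main update; F_prev is reused next time
    return x

# Strongly monotone toy operator F(x, y) = (y + mu*x, -x + mu*y);
# its unique zero is the origin.
mu = 0.5
F = lambda z: np.array([z[1] + mu * z[0], -z[0] + mu * z[1]])
z = past_extragradient(F, np.array([1.0, -1.0]), gamma=0.2, iters=300)
print(np.linalg.norm(z) < 1e-6)  # True: converges to the solution
```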
**\[...a clear delineation between the contributions of the proposed method and existing ones would be beneficial for the reader.\]**
- As discussed in answer to the previous question, the empirical evidence shows that the single-call method can have lower computational complexity than the SEG methods. It motivates us to study the single-call methods in more detail.
- We want to highlight that, previously, researchers used bounded variance to study single-call methods. However, we have provided an example in the previous answer of why this may not hold in simple scenarios. We lift such unrealistic assumptions and work under Expected Residual in our work.
- Furthermore, we provide convergence guarantees of SPEG for various sampling strategies. We also provide empirical evidence to show the advantage of using importance sampling over uniform sampling for SPEG.
- Moreover, we provide convergence guarantees of the stochastic single-call method for solving weak minty variational inequality problems with $\rho < \frac{1}{2L}$. It also improves the existing restriction on $\rho$ for stochastic methods. Previously, the best-known result used bounded variance assumption with $\rho < \frac{3}{8L}$ to provide a convergence guarantee of the stochastic method.
**If you agree that we managed to address all issues, please consider raising your score. If you believe this is not the case, please let us know so that we have a chance to respond.**
---
Rebuttal Comment 1.1:
Title: Answer
Comment: I have a similar opinion to Reviewer rrNt.
I don't have more questions. We will discuss later in the committee whether the contribution is incremental or not. Would you like to add anything on this aspect?
---
Reply to Comment 1.1.1:
Title: Final Comments
Comment: Thanks for the response. We want to mention that reviewer rrNt completely ignores our contribution to solving weak minty VIPs. Moreover, reviewer rrNt argues that the work of [29] is similar to ours. However, the ER condition and its variants are the only similarity between [29] and our work. The algorithm considered in our work (single-call version) and the classes of problems we solve differ entirely from what is considered in [29]. We have also mentioned the difficulties of analyzing the single-call method under ER conditions in our response to rrNt. The analysis of the single-call method under ER is more complex than rrNt states.
It is clear to us that our work answers several open questions in the literature on solving VIPs, in particular on the performance of SPEG (one of the most popular algorithms in the literature). As a result, our contribution can be significant for the ML community.
Before our work the following questions were open:
1. What is the convergence performance of SPEG in the quasi-strongly monotone (3) and weak MVI (4) cases without the bounded variance assumption?
2. Can we relax the assumptions on stochastic estimators? (We use ER condition, which holds for free when the operators are Lipschitz.)
3. What are the beneficial step-size selections of SPEG for faster practical performance on quasi-strongly monotone and weak MVI VIPs?
4. Can we analyze SPEG beyond the classical uniform sampling? That is, can arbitrary sampling and importance sampling be used to improve the performance of the method, and if so, how do the convergence guarantees capture that scenario?
5. Via numerical experiments we verify the tightness of our theoretical results. That is, we show that the proposed step-size selections behave exactly as the theory suggests, we numerically evaluate the benefits of importance sampling (compared to the previously used uniform sampling), and we also show that our proposed theoretical results lead to faster convergence compared to the work [28].
With the above open questions in mind, we believe that our work fully justifies acceptance to NeurIPS, and we disagree with reviewer rrNt's statement that the “work (mainly the proof techniques required to establish the results) is too incremental to be considered for a venue like NeurIPS”. We politely disagree with the reasoning of that statement. In our opinion, the fact that part of the proof techniques is an extension of previous works is **NO reason for suggesting rejection of a paper**. The contributions of our work (answering major open questions in the area) are significant.
We hope reviewer Bwgk will consider the abovementioned points while making the final decisions and even consider increasing their original score to support our work. | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable feedback and time. In particular, we appreciate that the reviewers acknowledged the following strengths of our work:
- Reviewer rrNt acknowledges the problem (considered in our work) is relevant and needs to be addressed.
- All the reviewers identify relaxing assumptions like bounded variance or growth conditions as one of the main strengths of our work.
- Both Bwgk and dTzE recognize our work as the first to provide convergence analysis of single-call methods under arbitrary sampling.
- Reviewer Bwgk acknowledges the improvement provided in our work for solving weak minty VIPs (we improved the restriction on $\rho$ in the stochastic setting to provide convergence).
- All the reviewers think the paper is well-written, and we have done a good job presenting the prior literature and providing thorough discussion.
- Reviewers bysm and dTze appreciate our numerical experiments to support the theoretical findings.
The reviewers also have several questions and concerns that we address in our responses to each reviewer. We have attached a plot called SSEGvsSPEG.pdf here. This plot is for reviewer **Bwgk** to answer one of his questions. You can find more details about the plot in the response to **Bwgk**.
If the reviewers have further questions/concerns/comments, we will be happy to participate in the discussion.
Pdf: /pdf/6e794810cf6b963478e184292d4c5373d954a9d3.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Federated Compositional Deep AUC Maximization | Accept (poster) | Summary: This work aims to address the challenges of imbalanced data in FL. To this end, the authors propose to optimize AUC score. Some experiments are conducted to verify the effectiveness of the proposed method.
Strengths: The paper is easy to follow. The notations are well-defined. The studied problem is promising.
Weaknesses: This work confuses me a lot.
The authors believe that data imbalance is a crucial issue under the FL scenario, which is consistent with my understanding. This problem has motivated data-splitting protocols such as Latent Dirichlet Sampling [1]. Moreover, many excellent works have demonstrated the effectiveness of their efforts in mitigating data heterogeneity [2, 3]. However, I cannot find these works discussed in this paper. This indicates that the authors may overlook some advanced methods in this field.
[1] Measuring the effects of non-identical data distribution for federated visual classification
[2] Federated optimization in heterogeneous networks
[3] SCAFFOLD: Stochastic controlled averaging for federated learning
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: cf. Weaknesses
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: cf. Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewer's comments and suggestions. We address the reviewer’s comments below.
First, our method is significantly different from the heterogeneous federated learning approaches. Specifically, most existing heterogeneous federated learning approaches consider a setting where **the local distributions are imbalanced but the global distribution is balanced**, i.e., different clients have different data distributions but the combination of all clients' data is balanced. On the contrary, our work considers a setting where **both the local and global distributions are imbalanced**, which is much more challenging than the setting addressed by existing heterogeneous federated learning methods.
Second, we have cited [2] and [3] in Lines 112-118. We will cite and discuss [1] as the reviewer suggested.
Third, we have compared with the SOTA method that is designed to address the global imbalance issue, i.e., LocalSGDM-RL [27]. Our method outperforms this strong baseline by a large margin on all datasets.
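To illustrate why AUC, rather than accuracy, is the natural target under global imbalance (a toy example of our own, not from the paper):

```python
import numpy as np

# A constant majority-class predictor reaches 95% accuracy on a 95/5
# imbalanced label pool yet only chance-level AUC; AUC sees through the
# imbalance. Data and score model are illustrative choices.
rng = np.random.default_rng(0)
y = np.array([0] * 950 + [1] * 50)  # globally imbalanced labels

def auc(scores, labels):
    # AUC = P(random positive scores above random negative); ties count 1/2
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

constant = np.zeros(len(y))                     # always predict "negative"
informative = y + rng.normal(0.0, 0.3, len(y))  # noisy but informative scores

print(((constant > 0.5) == y).mean())  # 0.95 accuracy for the dummy scores
print(auc(constant, y))                # 0.5: chance-level AUC
print(auc(informative, y) > 0.9)       # True for informative scores
```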
**We hope we have answered all your questions. We sincerely hope the reviewer can consider our response and re-evaluate the contributions of our work. In fact, we considered a more challenging setting and provided a novel solution to address this challenge. Both theoretical and empirical results confirm the effectiveness of our method. We believe our contributions are important to this area and our work can inspire more follow-up works that handle more challenging federated learning settings.**
---
Rebuttal Comment 1.1:
Title: Further comments
Comment: Hi,
Thanks for the careful responses! Some of my concerns are addressed. However, it is still unclear why FedProx and Scaffold are not considered baseline methods. I suggest the authors add the corresponding results since existing methods can solve the studied imbalance problem partially.
A follow-up confusion, based on the response: the mentioned theoretical results show the convergence rate, so why does that confirm the effectiveness of the proposed method?
---
Reply to Comment 1.1.1:
Comment: Thanks for the reviewer's comments.
**Q1**. …I suggest the authors add the corresponding results since existing methods can solve the studied imbalance problem partially.
**A1**: (1) Please note that our work focuses on optimizing AUC for federated learning. Therefore, we compared with strong and direct baselines of federated learning for optimizing AUC, including CoDA [1] and LocalSGDAM. As prior works [2, 3] have demonstrated that optimizing the traditional cross-entropy (CE) loss yields worse results than AUC maximization for imbalanced data, we tried not to include too many baselines that optimize the CE loss in FL. (2) We would like to draw the reviewer's attention to the fact that CoDA leverages the epoch-wise proximal term as in FedProx and the variance reduction as in Scaffold for solving the min-max formulation of AUC maximization. Therefore, CoDA is a much stronger baseline than FedProx and Scaffold. (3) Nevertheless, we have conducted an experiment to compare with Scaffold and FedProx for optimizing the CE loss on the STL10 dataset with $p=4$, and the results are shown below. It can be observed that our algorithm outperforms those two baselines by a large margin.
| | Scaffold | FedProx | CoDA | LocalSCGDAM(Ours) |
|-----|----------|---------|-------|-------|
| AUC | 0.788 | 0.778 | 0.801 | 0.820 |
**Q2**. …why it can confirm the effectiveness of the proposed method?...
**A2**. The effectiveness of the proposed method can be explained as follows. (1) We optimize the compositional AUC formulation, which has been shown to learn much better feature representations than conventional AUC maximization [4]. (2) Our algorithm design and theoretical analysis ensure that our algorithm has a small sample complexity. This is also important because, with the same number of epochs, an algorithm with a higher sample complexity may find a worse solution. Therefore, our theoretical analysis ensures that our algorithm can quickly find a good solution to compositional AUC maximization. Together, these points explain the effectiveness of our algorithm. (3) We would like to emphasize that it is not trivial to attain the $O(1/(K\epsilon^4))$ complexity for solving the compositional AUC loss as in our paper. For instance, [5, 6] show that the stochastic gradient descent (SGD) or stochastic compositional gradient descent (SCGD) algorithm under the federated learning setting has a sample complexity of $O(1/\epsilon^{8})$ or $O(1/\epsilon^{5})$, which cannot match the sample complexity $O(1/(K\epsilon^{4}))$ of SGD for non-compositional optimization problems. In our theoretical analysis, we show that our algorithm LocalSCGDAM achieves a sample complexity of $O(1/(K\epsilon^{4}))$, matching that of traditional federated learning algorithms for non-compositional optimization problems.
[1] Guo et al., Communication-efficient distributed stochastic auc maximization with deep neural networks. ICML 2020.
[2] Yuan et al., Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification. ICCV 2021.
[3] Liu et al., Stochastic AUC Maximization with Deep Neural Networks. ICLR 2020.
[4] Yuan et al., Compositional Training for End-to-End Deep AUC Maximization. ICLR 2022.
[5] Huang et al., "Compositional federated learning: Applications in distributionally robust averaging and meta learning." arXiv preprint arXiv:2106.11264 (2021).
[6] Wang et al., "Memory-Based Optimization Methods for Model-Agnostic Meta-Learning and Personalized Federated Learning." JMLR 24 (2023): 1-46. | Summary: This paper is the first to study the federated compositional AUC maximization problem, which involves both local and global imbalanced distributions, and proposes the momentum-based algorithm LocalSCGDAM to solve it. State-of-the-art convergence rates are established, and various experiments are used to evaluate the proposed algorithm.
Strengths: Novelty: This paper is the first work to consider the federated compositional AUC maximization problem. Data heterogeneity is a key problem in federated learning and many works focus on it. The federated compositional AUC maximization problem is more challenging because it considers both the local and global imbalanced distributions.
Quality: It proposes a new algorithm to solve the problem. The structure of the algorithm is clear. Both theoretical analysis and experimental verification are provided. The theoretical analysis is very complete.
Clarity: The paper is organized well and it is easy to follow.
Weaknesses: 1. More motivation should be introduced, including 1) why AUC is significant and 2) why FL AUC should be considered.
2. In the experiments, the imbalance ratio is set to 0.1 throughout. Discussion of different imbalanced data settings would be welcome.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In the math, $\mathcal{D}^k_{g}$ and $\mathcal{D}^k_{n}$ are clear, and they are the datasets in the inner layer and outer layer. But in AUC maximization, how are these datasets defined?
2. This work solves the binary imbalanced data distribution problem well. I am curious whether it is possible to extend it to the multiple-class imbalanced data distribution problem.
3. In the nonconvex optimization, we consider the convergence rate, including sample complexity and communication complexity, to reach an $\epsilon$-stationary point. In this case, what is the convergence rate of your algorithm?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewer's comments and suggestions. We address the reviewer’s comments below.
Answers for weakness.
**1**. First, when the data distribution is imbalanced, directly minimizing the cross-entropy loss function cannot learn a good classifier since it may ignore the minority class. On the contrary, the AUC loss function focuses on both the majority and minority classes, which can alleviate the limitation of the traditional cross-entropy loss function [31]. Thus, we leverage AUC maximization to address the imbalance issue, which has been confirmed by our experiments, i.e., LocalSCGDAM outperforms LocalSGD.
Second, the AUC loss function is a minimax function, which is more difficult to optimize than the traditional cross-entropy loss function. Thus, we developed the LocalSCGDAM algorithm to optimize the compositional AUC loss function, which incorporates the pretraining process as shown in Eq. (2). As such, the prediction performance is better than directly optimizing the AUC loss function, which is confirmed by our experiments, i.e., LocalSCGDAM outperforms CoDA and LocalSGDAM.
Third, FL is an effective tool for real-world data analysis tasks and data distributions in those tasks are typically imbalanced, e.g., the electronic health record (EHR) data. Thus, enabling federated learning for AUC optimization is necessary and important to address real-world learning challenges.
**2**. Thanks for the reviewer's suggestion. In fact, we have already conducted this experiment in the Appendix. The results in Figure 4 have confirmed the superior performance of our method for different imbalance ratios.
Answers for questions.
**1**. As shown in Eq. (2), the inner-level function is to pre-train the classifier by minimizing the cross-entropy loss function, and the outer-level function is to learn the classifier via optimizing the AUC loss function. Both of them use the training samples and their labels. In other words, $\mathcal{D}_f$ and $\mathcal{D}_g$ in compositional AUC maximization denote the two copies of the training dataset.
**2**. It is easy to extend the binary classification case to the multi-class case. Please see Eq.(4) in [16].
**3**. Based on Theorem 1, to reach the $\epsilon$-stationary point, i.e., $\frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}\left[\left\|\nabla \Phi\left(\overline{\mathbf{x}}_t\right)\right\|^2\right]\leq \epsilon^2$, the convergence rate is $O(\frac{1}{K\epsilon^4})$, the sample complexity on each client is $O(\frac{1}{K\epsilon^4})$, and the communication complexity is $O(\frac{1}{\epsilon^3})$.
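In more detail, the sample complexity follows from the $O(1/\sqrt{KT})$ rate (Remark 1) by a standard conversion, assuming each client draws $O(1)$ samples per iteration:

```latex
\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\left[\left\|\nabla \Phi(\overline{\mathbf{x}}_t)\right\|^2\right]
\le O\!\left(\frac{1}{\sqrt{KT}}\right)\le \epsilon^2
\quad\Longrightarrow\quad
T = O\!\left(\frac{1}{K\epsilon^4}\right),
```

so the per-client sample complexity equals the iteration count $T = O(1/(K\epsilon^4))$, and the number of communication rounds is $T$ divided by the communication period.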
---
Rebuttal Comment 1.1:
Comment: Thanks for your response and I keep my score. | Summary: The paper proposes a new federated learning algorithm to address the class imbalance problem. Instead of using cross-entropy loss functions, the proposed algorithm directly optimizes the AUC score by solving a federated stochastic compositional minimax optimization problem. Specifically, the paper proposes to employ the local stochastic gradient with momentum to update the local model parameters. Experiments show that the proposed algorithm achieves better model performance than the other baselines.
Strengths: 1. The paper considers a setting where the global distribution is also class-imbalanced, which is interesting.
2. The paper provides a theoretical analysis of the convergence of the proposed algorithm.
Weaknesses: 1. There are many FL studies that try to address the non-IID challenge in FL. However, the baselines seem to focus on different optimization methods and lack SOTA FL studies that address non-IID data (e.g., [1, 2]).
[1] Addressing class imbalance in federated learning
[2] No fear of heterogeneity: Classifier calibration for federated learning with non-iid data
2. The theoretical analysis requires assumptions on the outer-level and inner-level functions. I’m not sure how realistic these assumptions are.
3. The algorithm estimates the inner-level function in local training. It may not be applicable in the client sampling setting, where a client may be selected after many rounds.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Are existing FL approaches applicable with LocalSCGDAM? Existing studies usually optimize the standard cross-entropy loss. I’m curious whether they are applicable to the studied loss function and how they would compare with LocalSCGDAM.
2. Does the experimental setting satisfy the assumptions in Section 3.2?
3. Is the algorithm applicable in the client sampling setting? Also, how many clients are used in the experiments? I suggest the authors add experiments to study the scalability.
4. How does the convergence rate of LocalSCGDAM compare with other related studies?
5. Is directly optimizing AUC loss a mainstream method in class-imbalanced learning?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: One potential limitation is that the algorithm may not work well in the client sampling setting.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewer's comments and suggestions. We address the reviewer’s comments below:
Answer for weakness.
**1**. Firstly, our method is significantly different from the heterogeneous federated learning methods. Specifically, our paper aims to address a more challenging issue than the traditional FL methods designed for non-IID data.
More specifically, even though some traditional FL methods try to address the non-IID issue, they typically assume **the global data distribution (i.e., combining the data of all clients) is balanced**. For instance, the reference [2] that the reviewer provided uses a Dirichlet distribution to simulate non-IID data, where different clients have different distributions but the global distribution is balanced. On the contrary, our method and the reference [1] that the reviewer provided assume **the global distribution is imbalanced**, which is much more challenging. To address this challenging task, we developed a federated compositional AUC maximization model to handle the global imbalance issue, which has shown superior performance according to our experimental results.
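For concreteness, here is a minimal sketch (assuming numpy; function and variable names are illustrative, not from the paper) of a Dirichlet-based non-IID split applied to labels whose global distribution is itself imbalanced, i.e., the setting studied in this paper rather than the balanced-global setting of [2]:

```python
import numpy as np

rng = np.random.default_rng(0)

def dirichlet_partition(labels, n_clients=4, alpha=0.5):
    """Split sample indices across clients: each class's samples are
    divided according to client proportions drawn from Dirichlet(alpha)."""
    parts = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for k, chunk in enumerate(np.split(idx, cuts)):
            parts[k].extend(chunk.tolist())
    return parts

# Globally imbalanced binary labels (~10% positives): the union of all
# clients' data is itself imbalanced, not just each client's local share.
labels = rng.choice([0, 1], size=2000, p=[0.9, 0.1])
client_indices = dirichlet_partition(labels, n_clients=4, alpha=0.5)
```

Each client receives a different class mix (local non-IID), while positives remain rare in the union of all clients (global imbalance).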
Secondly, we have compared with [1] in our experiments, which is denoted by LocalSGDM-RL in Line 257. All experimental results have shown that our algorithm can outperform [1] by a large margin.
**2**. Those assumptions are common in compositional optimization problems. In particular, the single-machine counterpart algorithms [4, 35] use the same assumptions, and we do not require any stronger assumptions than they do. In fact, these assumptions are mild, and the model used in our experiment, which is also used in [35], satisfies them.
**3**. It is straightforward to extend our algorithm to the client sampling setting. Each selected client updates the inner-level function estimator and then computes the stochastic compositional gradient to update the model parameters, which does not require special operations under the client sampling setting. We will provide a discussion of this point in the final version.
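As a hypothetical sketch of that extension (Python; `estimate_inner` and `local_step` are illustrative stand-ins for the inner-level estimator update and a local LocalSCGDAM step, not the paper's actual code):

```python
import random

def fed_round(global_model, clients, sample_size, local_steps):
    """One communication round with client sampling: each selected client
    refreshes its inner-level estimator locally, runs local updates, and
    the server averages the returned models."""
    selected = random.sample(clients, sample_size)
    updates = []
    for c in selected:
        model = dict(global_model)           # client starts from the global model
        c["u"] = c["estimate_inner"](model)  # refresh inner-level estimator u
        for _ in range(local_steps):
            model = c["local_step"](model, c["u"])
        updates.append(model)
    # server: average the parameters returned by the sampled clients
    return {k: sum(m[k] for m in updates) / len(updates) for k in global_model}
```

The key point is that the inner-level estimator is refreshed upon selection, so no special bookkeeping is needed for clients that sit out several rounds.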
Answer for questions.
**1**. Existing FL approaches learn the classifier via minimizing the cross-entropy loss function, which is a **minimization** problem. On the contrary, the AUC loss function used in this paper is a **minimax** loss function. For minimization problems, one should use the **stochastic gradient descent** algorithm, while the **stochastic gradient descent ascent** algorithm should be used for minimax problems. Since LocalSCGDAM is designed for a compositional minimax problem, not a minimization problem, the standard FL approaches cannot be applied to the compositional minimax problems.
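The structural difference can be illustrated with a toy saddle-point problem (an illustrative sketch, not the paper's AUC objective): gradient **descent** on the primal variable is paired with gradient **ascent** on the dual variable.

```python
def sgda_step(w, alpha, grad_w, grad_alpha, lr_w=0.1, lr_alpha=0.1):
    """One (stochastic) gradient descent-ascent step: descend on the
    min variable w, ascend on the max variable alpha."""
    return w - lr_w * grad_w(w, alpha), alpha + lr_alpha * grad_alpha(w, alpha)

# Toy strongly-convex-strongly-concave problem
# f(w, a) = w**2 / 2 + w * a - a**2 / 2, with unique saddle point (0, 0).
gw = lambda w, a: w + a   # df/dw
ga = lambda w, a: w - a   # df/da
w, a = 1.0, 1.0
for _ in range(500):
    w, a = sgda_step(w, a, gw, ga)
```

Plain SGD would only perform the descent half and cannot solve the inner maximization, which is why standard FL pipelines built on SGD do not directly apply to minimax losses.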
**2**. Please see the answers for weakness 2.
**3**. Please see the answers for weakness 3. We use four clients currently. We will use more clients to verify the performance in the final version.
**4**. As shown in Remark 1, the convergence rate is $O(1/\sqrt{KT})$ where $K$ is the number of clients, $T$ is the number of iterations. Obviously, this convergence rate matches that of standard FedAvg for nonconvex problems.
**5**. Directly optimizing AUC loss is a powerful approach to address the imbalanced data classification issue. It has attracted a lot of attention in the past few years, e.g., [6,35].
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. Most concerns are addressed, but my concern about client sampling remains. The estimation of the inner-level function may be affected by client sampling. I suggest that the authors add experiments on this.
---
Reply to Comment 1.1.1:
Comment: Thanks for the reviewer's comments.
**Q1**. …I suggest that the authors add experiments to it.
**A1**. We would like to emphasize that our experiments focus on **cross-silo federated learning**, which has been widely used in many applications, such as the biomedical image classification used in our experiment. **For cross-silo federated learning, it is NOT necessary to do client sampling**. In particular, under the cross-silo federated learning setting, there is not a huge number of participants. It is preferable to have all participants take part in the computation since there is not a lot of training data, e.g., biomedical data. Thus, it is not necessary to do client sampling.
Nevertheless, we have conducted an experiment about client sampling. In this additional experiment, there are four workers. We use STL10 dataset and the communication period is set to 4. To simulate the client sampling scenario, we randomly select three workers to participate in the computation. The following table shows the test AUC score of our algorithm with/without client sampling. It can be observed that LocalSCGDAM-with-sampling has a similar performance to LocalSCGDAM-without-sampling.
| | LocalSCGDAM-with-sampling | LocalSCGDAM-without-sampling |
|-----|---------------------------|------------------------------|
| AUC | 0.813 | 0.820 | | null | null | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
PPi: Pretraining Brain Signal Model for Patient-independent Seizure Detection | Accept (poster) | Summary: The manuscript presents an innovative model called PPi (Pretraining-based model for Patient-independent seizure detection) for patient-independent seizure detection utilizing SEEG data. SEEG provides detailed and three-dimensional brainwave information which is advantageous for seizure detection. However, challenges emerge in modelling SEEG data due to the substantial domain shift between different patients and the evolving patterns among various brain areas.
To tackle these challenges, the authors introduce two novel self-supervised tasks during the pretraining phase. These tasks extract comprehensive information from the available SEEG data while preserving the unique characteristics of brain signals recorded from different brain areas. They also propose two techniques, channel background subtraction and brain region enhancement, to effectively address the domain shift problem between patients.
The performance of the PPi model is substantiated through extensive experiments conducted on two public datasets and a real-world clinical dataset collected by the authors. The experiments demonstrate that the PPi model outperforms state-of-the-art baselines, showcasing the practicality and effectiveness of the model. The authors also provide a visualization analysis that further validates the rationality of the two proposed domain generalization techniques.
The contributions of this work encompass the application of patient-independent seizure detection on a large-scale real-world clinical SEEG dataset under clinical requirements. This achievement highlights the innovation of the self-supervised pretraining tasks and the domain generalization techniques that the authors propose to manage the domain shift between patients. Furthermore, the superior performance of the PPi model is demonstrated through extensive experiments with other state-of-the-art methods, thereby emphasizing the significant application value of the work.
Strengths: 1. Figure 1. Figure 3. and Figure 4. are not only clear but also self-explanatory, demonstrating a well-designed visual depiction of the proposed method. This effectively simplifies the understanding of the framework architecture and the proposed method's workflow, serving as a useful tool for readers to follow the discussion and analysis in the manuscript.
2. The paper included extensive experimentation, which covers three different datasets. Their rigorous approach in testing the proposed method against a broad range of state-of-the-art methods provides a robust and comprehensive evaluation of its performance. Such extensive evaluation contributes to the manuscript's credibility and confirms the model's generalizability and applicability to various data contexts.
3. The performance of the PPi model, as reported in the results, is very promising. The superior performance of PPi when compared to state-of-the-art methods in seizure detection underlines the efficacy of the proposed method and its potential contribution to the field. It further exemplifies the practical significance and potential of the PPi model in real-world applications, especially in clinical scenarios.
4. The authors' inclusion of an ablation study effectively demonstrates the impact and contribution of different components of the proposed method towards the overall performance. This provides clear insights into the proposed approach's effectiveness.
Weaknesses: 1. In Section 4.1.2, the authors stated that their proposed self-supervised learning (SSL) task exhibits a robust capability for extracting time-domain features. However, the basis for this assertion is unclear, and the inclusion of supporting references would strengthen the argument. Furthermore, the paper does not explain why the SSL tasks are applied solely to the time domain and not the frequency domain. Providing a more detailed discussion regarding these decisions would improve the narrative and understanding of the method.
2. The clinical dataset described in Section 5.1 is highly imbalanced, yet the paper does not adequately assess how the model handles class imbalance. If the authors believe that the proposed SSL tasks can inherently address the data imbalance issue, then a more detailed analysis and validation of this claim would be beneficial.
3. The authors mention the importance of preserving unique brain area characteristics for effective seizure detection across different lesions in the introduction section. However, it remains unclear how other individual differences are accounted for in their method, beyond the variance in seizure locations. Furthermore, there is a lack of qualitative analysis supporting how well the proposed model preserves the characteristics of different brain areas across channels, which is crucial to corroborate the authors' claims.
4. Figure 5, in its current state, is relatively small and not entirely intuitive. Although the authors mention that the t-SNE plots were only split after the dimensionality reduction, the plots do not clearly show that they are derived from the same computation. I recommend reducing the total number of data points in the plot and plotting both sets on the same t-SNE graph with 4 distinct colours instead of splitting them. In addition, add another set of t-SNE plots of two channels from different brain regions of two different patients as a comparison, to demonstrate that the unique characteristics of different brain regions are kept within the latent features.
5. The ablation study should also include an evaluation of the reconstruction loss term and the self-attention layers' significance. Their roles and impacts on the model's performance should be more explicitly assessed to provide a more comprehensive understanding of the model's architecture and its component's contribution.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. In Section 5.4's Ablation study, the performance of the PPi-SSL model exhibits some inconsistencies. While it outperforms PPi-SSL2 on the FNUSA dataset, it shows considerably reduced effectiveness on the Clinical dataset. Could the authors expand on the potential reasons for these discrepancies? For instance, could the model have collapsed during training in the absence of reconstruction loss? Or might the performance variation be attributed to the differing class distributions within the datasets?
2. Similarly in Section 5.4's Ablation study, it remains unclear why the condition of PPi-brain region isn't applicable to the two public datasets. Is it because the brain region enhancement cannot be used at all for the two public datasets? A clarification on this matter from the authors would be appreciated.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: please refer to the weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for all the insightful comments. Responses to specific comments are listed below.
* **W1: Add supporting references to strengthen the argument for SSL tasks.**
Thanks for the suggestions. [1], [3], and [4] are some works that also apply SSL to extract time-domain features. [2] stated that the performance gap between balanced and imbalanced pre-training with SSL is significantly smaller than the gap with supervised learning, which means SSL is more robust in our scenario.
Our SSL tasks are applied solely to the time domain because the designed tasks mainly capture the change of the signal over time. In particular, context swapping is designed to enhance the coherence and semantic uniqueness of contextual information, which is based on time-domain information.
We will add the above discussions to our manuscript.
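For concreteness, the context-swapping construction described above can be sketched as follows (a hypothetical numpy illustration; the actual segment/context layout in the paper may differ):

```python
import numpy as np

def context_swap(sig_a, sig_b, ctx_len):
    """Build an 'incoherent' pair for the SSL task: exchange the left and
    right context windows of two target segments while keeping each
    segment's own centre intact."""
    a, b = sig_a.copy(), sig_b.copy()
    a[:ctx_len], b[:ctx_len] = sig_b[:ctx_len].copy(), sig_a[:ctx_len].copy()
    a[-ctx_len:], b[-ctx_len:] = sig_b[-ctx_len:].copy(), sig_a[-ctx_len:].copy()
    return a, b  # these carry the "swapped" label for the discrimination task
```

Because the contexts now belong to a different signal than the centre, the model can only solve the task by learning time-domain coherence across the segment boundaries.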
* **W2: Add supporting references to explain why SSL can be robust to dataset imbalance.**
Thanks for the suggestions. There are some works which claim that SSL can address the data imbalance issue. For example, [1] \(SIGKDD 2023\) adopts SSL pre-training to alleviate the data imbalance. [2] (ICLR 2022 spotlight) systematically investigate SSL under dataset imbalance and find that SSL is more robust to dataset imbalance. We will add the above works to the manuscript to clarify this claim.
* **W3&W4: Motivations for preserving unique brain area characteristics and additional visualization analysis as supporting.**
Thanks for the concerns and suggestions. Our response consists of two parts.
(1) Owing to the similarity in structure and function of the same brain region across patients, the representations of the same brain regions can be aligned (illustrated by the case study). Thus, we only consider preserving the unique characteristics among different brain regions rather than other individual differences.
(2) We conducted visualization analysis with 7 more examples (shown in Fig. 2 and Fig. 3 from the pdf file in global response). As you suggested, we reduce the number of points included in the plot to a fifth of the original and plot both figures of two patients on the same t-SNE graph with 4 distinct colours.
In Fig. 2, 6 more examples were included. In each of them, the representations are from the same brain region of two different patients.
In Fig. 3, we add another example where the representations are from different brain regions of two different patients. Compared with other plots, the distributions in this plot are not totally aligned after the channel background subtraction, indicating the unique characteristics of different brain regions are preserved within the latent features.
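The split-after-reduction procedure can be sketched as follows (hypothetical features; a PCA projection stands in for t-SNE to keep the sketch dependency-free, since the point illustrated is the split-after-embedding logic, not the reducer itself):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical latent features: two channels from each of two patients.
feats = {f"p{i}_ch{j}": rng.normal(size=(50, 16)) for i in (1, 2) for j in (1, 2)}

# Embed ALL groups in one joint computation (t-SNE in the paper; a PCA
# projection via SVD is used here as a stand-in reducer).
X = np.vstack(list(feats.values()))
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
emb = Xc @ vt[:2].T

# Only AFTER the joint reduction is the embedding split into four groups,
# one colour each, so all groups share the same coordinate system.
groups = dict(zip(feats, np.split(emb, 4)))
```

Splitting after a single joint reduction guarantees that the four colour groups are directly comparable, which is what a per-group reduction would not provide.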
* **W5: Ablation study for reconstruction and self-attention layer.**
Thanks for the suggestion. The ablation study of the reconstruction loss term and the self-attention layers' significance is shown in Table 1 in the global response. We will add the results to the original ablation study to our revised paper, including the settings, results and corresponding analysis.
* **Q1: Discussions about exhibiting performance variations on different datasets (explored by additional experiments).**
Thanks for bringing up this point. On the FNUSA dataset, PPi-SSL actually obtains 47.29% on the F2 score, which is slightly lower than PPi-SSL2 (48.42%). On the MAYO and clinical datasets, the performance of PPi-SSL shows a considerable decrease compared with PPi-SSL2. To explore such discrepancies, we reran these two sets of experiments and obtained similar results:
| | MAYO | | | | FNUSA | | | | Clinical | | | |
| -------- | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ |
| | Pre. | Rec. | F1 | F2 | Pre. | Rec. | F1 | F2 | Pre. | Rec. | F1 | F2 |
| PPi-SSL2 | 41.27+/-5.59 | 39.10+/-5.43 | 36.02+/-4.97 | 36.96+/-5.13 | 50.45+/-7.45 | 49.43+/-6.86 | 47.25+/-2.80 | 48.78+/-4.49 | 18.94+/-4.82 | 47.21+/-6.87 | 16.85+/-3.02 | 24.15+/-3.16 |
| PPi-SSL | 33.23+/-8.44 | 24.02+/-5.12 | 26.12+/-5.83 | 24.47+/-5.24 | 74.20+/-7.96 | 40.11+/-5.78 | 52.56+/-4.93 | 45.36+/-5.35 | 9.87+/-2.14 | 25.32+/-4.65 | 12.74+/-2.66 | 16.19+/-2.99 |
So we think the performance variation might be attributed to the differing class distributions within the datasets.
* **Q2: Clarification about brain region enhancement task for the two public datasets.**
In the manuscript, line 296: "For the ablation experiments on two public datasets, brain region enhancement is not applied due to lack of information about brain regions."
As brain region enhancement needs the brain region labels and these two public datasets do not contain brain region labels, brain region enhancement cannot be applied to the two public datasets. We will reorganize our description in the text to make it clearer: "Due to the requirement of brain region labeling in brain region enhancement, the lack of such labels in the two public datasets makes brain region enhancement inapplicable to the public datasets".
References:
[1]Chen J, Yang Y, Yu T, et al. Brainnet: Epileptic wave detection from seeg with hierarchical graph diffusion learning[C]. 2022.
[2]Liu H, HaoChen J Z, Gaidon A, et al. Self-supervised learning is more robust to dataset imbalance[J]. 2021.
[3]Shao Z, Zhang Z, Wang F, et al. Pre-training enhanced spatial-temporal graph neural network for multivariate time series forecasting[C]. 2022.
[4]Nie Y, Nguyen N H, Sinthong P, et al. A time series is worth 64 words: Long-term forecasting with transformers[J]. 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks for addressing my comments.
I have increased the score
---
Reply to Comment 1.1.1:
Title: Thanks for the reviews
Comment: We truly appreciate your effort in improving our paper and recognition of our work. | Summary: This paper proposes a model called PPi that pre-trains on SEEG data using two self-supervised tasks, followed by a channel background subtraction step as well as a brain region enhancement task for patient-independent seizure detection. Experiments on two public datasets and an internal dataset suggest that PPi outperforms existing models for seizure detection. Visualization of latent representations using a t-SNE plot indicate the effectiveness of the channel background subtraction step.
Strengths: 1. There is some originality in the design of the pre-training tasks and the channel background subtraction step.
2. Overall the methods are sound and the paper is relatively easy to understand.
3. The proposed model PPi shows improved performance over a set of baselines.
Weaknesses: 1. Several design choices in Methods need more justifications (see my questions below).
2. All the datasets are from a small number of patients, which could be a limitation and should be discussed.
3. Some notations and equations in Methods are not necessary, and can be better explained using plain texts. Overloaded notations make it harder to understand. For instance, Definition 1 is not necessary.
4. While PPi outperforms baselines, the overall performance of PPi is still low, particularly on the clinical dataset, which could limit its clinical applicability.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Some notations and equations in Methods are not necessary (e.g., Definition 1). Replacing some notations with plain language can help readability.
2. In the pre-training phase, how to sample contexts if the target segment is in the beginning or end of the signal? Please clarify.
3. More justifications for design choices are needed in Methods. For example, in “Channel discrimination” section, why is the difference vector used instead of a concatenated vector as done in “context swapping”? Why and how is self-attention used to aggregate frequency and time-domain representations?
4. Reconstruction loss is included in the channel discrimination pre-training task. Please include an ablation to show the impact of reconstruction loss.
5. In Section 5.4, the authors state that “for the ablation experiments on two public datasets, brain region enhancement is not applied due to lack of information about brain regions.” Does this mean that the results in Table 1 do not include brain region enhancement task for the two public datasets? Please clarify.
6. For the visualization analysis in Section 5.5, it would be good to show other patient examples to make sure that the observation is consistent across patients (this should be doable given that there is only 7 patients in the clinical dataset).
7. In my opinion, it’s a limitation that the data come from a small number of patients. This limitation should be discussed.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Some limitations are discussed in the end of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for all the insightful comments. Responses to specific comments are listed below.
* **Q1: Replace some notations with plain language in the manuscript.**
Thank you for this good suggestion. The plain-language version of Definition 1 is in lines 118-119: "Our goal is to utilize the data of labeled patients (source domains) to train a model which can be directly adopted to the data of unseen patients (target domains)." We will remove Definition 1 for better readability.
In line 160: "where the sampling probability $P(c_1 = c_2) = P(c_1 \neq c_2) = 0.5$", we will replace the formula with "where the two sequences are sampled from the same or different channels with equal probability."
* **Q2: The solution of sampling contexts for the beginning or end of the signal.**
In the pre-training phase, the data scale is large, so we simply discard the segments at the very beginning and end of each signal when sampling contexts. We apologize for not clarifying this and will add the clarification to the main manuscript.
* **Q3: Clarification for "difference vector" used in "channel discrimination" and ablation for aggregation strategy.**
In channel discrimination, since the target is to compare the difference between two signals, we use an elementwise absolute difference. This choice is motivated by [1], in which the model is trained on a relative positioning (RP) task: discriminating whether two given time windows are far from or close to each other, combining the two vectors with an elementwise absolute difference. Since both RP and channel discrimination involve comparing the difference between two given samples, we think an elementwise absolute difference is more suitable.
Context swapping, in contrast, does not involve a direct comparison between the two samples but focuses on the coherence and semantic uniqueness of the contextual information. As a result, we think a concatenated vector is more suitable for context swapping.
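For concreteness, a minimal sketch of the two combination operators discussed above (the 3-dimensional vectors are illustrative stand-ins for the learned channel representations, not real model outputs):

```python
import numpy as np

# Hypothetical channel representations, for illustration only.
h1 = np.array([0.2, 1.5, -0.3])
h2 = np.array([0.4, 0.9, -0.3])

# Channel discrimination compares the two representations directly,
# so they are combined with an elementwise absolute difference
# (as in the relative-positioning task of Banville et al. [1]).
diff = np.abs(h1 - h2)             # shape (3,), fed to the classifier head

# Context swapping needs no direct comparison, so both representations
# are kept intact via concatenation.
concat = np.concatenate([h1, h2])  # shape (6,)

print(diff, concat.shape)
```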
In the aggregation strategy, by assigning weights to the time-domain and frequency-domain representations through the attention mechanism, the model can adaptively choose between the two representations. In the implementation, we feed the time-domain and frequency-domain representations into a self-attention layer and then apply mean pooling to its output. To verify the effectiveness of self-attention experimentally, we add an additional ablation experiment on self-attention (the result is shown in Table 1 in the global response).
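A sketch of this aggregation step, using a single attention head in plain NumPy (without the output projection, residuals, or layer norm of a full self-attention layer; the weight matrices here are illustrative random values, not trained ones):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def aggregate(h_time, h_freq, Wq, Wk, Wv):
    """Self-attention over the two-token sequence [h_time, h_freq],
    followed by mean pooling, as described above."""
    X = np.stack([h_time, h_freq])           # (2, d): one token per domain
    Q, K, V = X @ Wq, X @ Wk, X @ Wv         # project to queries/keys/values
    A = softmax(Q @ K.T / np.sqrt(X.shape[-1]))  # (2, 2) attention weights
    return (A @ V).mean(axis=0)              # mean-pool the attended tokens

rng = np.random.default_rng(0)
d = 8
h_t, h_f = rng.normal(size=d), rng.normal(size=d)
Wq, Wk, Wv = [rng.normal(size=(d, d)) for _ in range(3)]
z = aggregate(h_t, h_f, Wq, Wk, Wv)
print(z.shape)  # (8,)
```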
* **Q4: Ablation for reconstruction loss.**
Thanks for the good advice. The ablation on the reconstruction loss is shown in Table 1 in the global response. We will add this result, along with the ablation for self-attention aggregation, to the original ablation study in our revised paper, including the settings, results, and corresponding analysis.
* **Q5: Clarification about brain region enhancement task for the two public datasets.**
Yes, the results for the two public datasets in Table 1 do not include the brain region enhancement task. Brain region enhancement requires brain region labels, which these two public datasets do not contain, so it cannot be applied to them.
* **Q6: Additional experiments for case study.**
Thank you for this good suggestion. We conducted six additional groups of visualization analysis and show the results in Fig. 2 of the PDF file in the global response. The patients we chose cover all 7 patients in the clinical dataset. In the new visualization analysis, we made the following improvements to make the results clearer (these improvements were suggested by reviewer DeFx):
(1) We reduce the number of points included in the plot to a fifth of the original by random sampling.
(2) We plot the representations of two patients on the same t-SNE graph with 4 distinct colours instead of splitting them.
Overall, in each group of experiments, channel background subtraction shows a similar effect, indicating that the effect of channel background subtraction is consistent across patients.
* **Q7: Discussions about subject numbers.**
Thanks for pointing this out. The number of patients in our datasets (7 in the clinical dataset, 13 in MAYO, and 18 in FNUSA) is relatively small compared to other fields (e.g., EEG). This is a good suggestion and we will discuss this limitation in our manuscript.
SEEG is an emerging field, and the conditions for obtaining data are very strict, as it requires craniotomy surgery for electrode implantation and long-term data recording. Other works in this field also contain a limited number of patients (e.g., [2] and [3] each contain 10 patients). We are working hard to communicate with hospitals, hoping to release more data and alleviate the problem of insufficient data in the SEEG field.
References:
[1] Hubert Banville, Omar Chehab, Aapo Hyvärinen, Denis-Alexander Engemann, and Alexandre Gramfort. Uncovering the structure of clinical EEG signals with self-supervised learning. Journal of Neural Engineering, 18:046020, 2021.
[2] Chen J, Yang Y, Yu T, et al. BrainNet: Epileptic wave detection from SEEG with hierarchical graph diffusion learning. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 2741-2751, 2022.
[3] Wang C, et al. BrainBERT: Self-supervised representation learning for intracranial recordings. In International Conference on Learning Representations, 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my comments. I have now increased the score.
---
Reply to Comment 1.1.1:
Title: Thanks for the reviews
Comment: We are truly grateful for the reviewer’s feedback and recognition of our efforts. | Summary: Seizure detection using stereoencephalographic data is crucial for epilepsy diagnosis. Owing to the large variety of seizure patterns and pathology, automated seizure detection is quite challenging, and manual annotation of data remains necessary. This study proposes a model to detect seizures from SEEG data in a patient-independent manner (PPi). The proposed model is based on self-supervised learning tasks. To handle the frequency domain shift between patients, they proposed background channel subtraction and brain region enhancement techniques.
The model presented in this article seems to outperform existing models, especially on clinical data, and could potentially help clinicians detect seizures.
Strengths: This study attempts to solve a common issue in epilepsy diagnosis, but this could also be helpful in annotating large SEEG datasets. The presentation of the results is clear, and all the results appear to be presented. The main strength of the proposed model is its capacity to be applicable to all types of patients by dealing with the huge domain shift between patients. The two techniques of channel background subtraction and brain region enhancement seem to be highly innovative and could potentially be applied to other datasets to answer other questions. The model performance appears to be very reasonable, especially for clinical data.
One of the main strengths of this study is the use of a self-supervised learning method that allows non-annotated data to be exploited, which is particularly valuable when working with such rare datasets.
Weaknesses: One of the main issues in this study is that they do not show raw traces of data. It would have been good to see some of the automatically detected seizures, to tell whether the well-detected seizures are the easiest to detect or whether some very complex seizures are also detected. The main problem with epileptic seizures is that their patterns differ depending on the pathology and brain region. Some seizures, such as low-voltage fast-activity seizures, could potentially be very hard for a model to detect, at least the real start of the seizures.
The article could specify whether, for the detected seizures, many manual adjustments were needed afterwards to correctly place the seizure onset.
The performance of the model is correct based on other existing models; however, the presented results are not sufficient to fully rely on automatic detection. As they said in the discussion, this could be helpful for clinicians, but it cannot replace manual annotation at the moment.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: What preprocessing steps did you apply to the dataset? This was not clear in the article.
Did you perform any artifact rejection before training?
Can you please show the raw traces of EEG for well-detected and undetected seizures? Are there some patterns that the model is unable to detect?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors seem to correctly address the limitations of the article.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for all the insightful comments. Responses to specific comments are listed below.
* **Q1&Q2: Preprocessing steps for the datasets.**
For the public datasets, we first remove the power-line noise and then downsample the data to 500 Hz. For the clinical dataset, we remove the power-line noise, downsample the data to 250 Hz, and apply a normalization to each channel via $\frac{x-\mu}{\sigma}$, where $\mu$ and $\sigma$ are the mean and standard deviation of the channel.
For artifact rejection, we remove the power-line noise in both the public datasets and the clinical dataset before training.
We apologize for not clearly mentioning the preprocessing steps and will add the above description to our manuscript.
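The steps above can be sketched as follows (assuming SciPy; the 50 Hz power-line frequency, the notch quality factor, and the 1000 Hz input / 250 Hz target rates are illustrative defaults, not the exact values from our code):

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt, resample_poly

def preprocess(x, fs, line_freq=50.0, target_fs=250):
    """Remove power-line noise, downsample, then z-score each channel.

    x: array of shape (channels, samples); fs should be a multiple of target_fs.
    """
    b, a = iirnotch(w0=line_freq, Q=30.0, fs=fs)   # power-line notch filter
    x = filtfilt(b, a, x, axis=-1)                 # zero-phase filtering
    x = resample_poly(x, up=1, down=fs // target_fs, axis=-1)
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    return (x - mu) / sigma                        # per-channel (x - mu) / sigma

# Two seconds of a synthetic 1000 Hz signal with 50 Hz line noise.
fs = 1000
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * 7 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
y = preprocess(x[None, :], fs)
print(y.shape)  # (1, 500)
```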
* **Q3: Examples of SEEG signals of well-detected and undetected seizures by the model.**
Of course. We show raw traces of well-detected and undetected seizures in Fig. 1 of the PDF file in the global response. Fig. 1 contains two examples, in which we highlight the normal signals (recorded when the patient is not having a seizure) in yellow, the well-detected seizures in red, and the undetected seizures in blue.
In most cases, the more pronounced seizure signals (with stronger fluctuations, highlighted in red) were easier for the model to identify. However, some seizure signals are very similar to the nearby normal signals (highlighted in blue), making them difficult for the model to identify.
---
Rebuttal Comment 1.1:
Comment: Thank you for your answer and the clarifications.
---
Reply to Comment 1.1.1:
Title: Thanks for the reviews
Comment: We genuinely appreciate your valuable reviews and recognition of our work. | Summary: This article deals with domain shifts in seizure detection with stereoelectroencephalography (SEEG), an emerging acquisition method in this field. To tackle this problem, the authors propose two self-supervised tasks to learn meaningful features from the SEEG, as well as two preprocessing techniques to reduce the shift between domains. Together, these propositions beat the SOTA methods on two public datasets and on the authors' own clinical dataset.
Strengths: This paper proposes a new method, PPi, applied to an emerging field, SEEG. PPi shows very good results compared to the SOTA. Moreover, the research is done in coordination with doctors, so the paper deals with real-world clinical issues.
The method is very clear.
The paper implements very nice experiments, allowing us to see the benefits of the method and the effects on the data.
The comparison is made over several other methods.
In addition, a new clinical dataset was collected for this paper. SEEG is a new field where every dataset is crucial to improve the impact in the science community.
Weaknesses: In the experimental part, the authors' claims do not match the performance in Table 1. For example, they claim a 54.93% improvement in F2 score on the clinical dataset, but the average performance is only 35.51% in Table 1.
The PPi method is a concatenation of several components (SSL, frequency spectral features, brain region, ...). Even with the ablation study, it is hard to understand the improvement contributed by each component. The last experiment visualizes the effect of the channel background subtraction, but that is the only visible effect. No other component seems to improve the results much except when all the components are combined.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: The paper claims a 54% improvement in F2 score, but Table 1 seems to give other results. A 35.51% F2 score is only a 17.40% improvement over the best baseline results (i.e., MiniRocket with VREx). Where does this claim come from?
In Table 2 (the ablation study), it is hard to understand which component is removed. For example, does PPi-power mean the whole PPi framework without the spectral power features, i.e., only the encoder features? Same question for PPi-SSL. With this experiment, it seems that none of the components is useful alone to improve performance: for each row of the table, the performance is below the baseline. Is PPi useful only when all the components are taken together?
Proposing a new clinical dataset can be game-changing in such an emerging field as SEEG. Will the dataset be publicly available too?
The score on the clinical dataset seems worse than on the public datasets. Do you think the score can be improved? Is a 35% F2 score enough to help doctors detect seizures in the brain?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: No limitation
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive and detailed comments. Responses to specific comments are listed below.
* **Q1: Calculation method of performance improvement value in the manuscript.**
We apologize that the calculation of the improvement may not be clearly stated in the manuscript. The improvement claimed in our manuscript is a relative improvement, as also used in [1]. Specifically, suppose the performances are $a\%$ and $b\%$ with $a > b$; the relative improvement is then calculated as $\frac{a-b}{b}\times 100\%$. We will add a clear description to the main manuscript.
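As a quick sanity check, the formula can be verified with the clinical F2 scores reported in our experiments (PPi 35.51 vs. the SEEG-Net baseline's 22.92), which reproduces the 54.93% figure; `relative_improvement` is just an illustrative helper name:

```python
def relative_improvement(a: float, b: float) -> float:
    """Relative improvement of performance a% over baseline b% (a > b),
    computed as (a - b) / b * 100."""
    return (a - b) / b * 100.0

# Clinical-dataset F2 scores: PPi = 35.51, SEEG-Net = 22.92.
print(round(relative_improvement(35.51, 22.92), 2))  # 54.93
```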
* **Q2: Detailed explanation and discussions about ablation study.**
In the ablation study, PPi-SSL1, PPi-SSL2, and PPi-SSL are included to demonstrate the effectiveness of our self-supervised tasks. PPi-SSL1 means that in the pre-training stage we only do the context swapping task; likewise, PPi-SSL2 means we only do the channel discrimination task. **PPi-SSL means the encoder is not pre-trained by the self-supervised tasks and is trained directly on the downstream seizure detection task.** PPi-power is included to demonstrate the effectiveness of the frequency-domain features. **PPi-power means we only use the encoder pre-trained with the self-supervised tasks to encode the data into representations, without the power spectral density.**
Since each component is carefully designed, removing any part causes the model's performance to drop considerably compared to the complete version, resulting in performance lower than the best baseline. This also reflects that our model is not simply a patchwork of individual components; rather, these components work together.
* **Q3: Plans about releasing our clinical dataset.**
Thank you for providing this good advice. Releasing such a clinical dataset involves highly sensitive intracranial data and personal privacy concerns, and we are working hard to make the dataset publicly available.
It is very difficult to make the whole dataset public in a short time. Therefore, we plan to promote the release of our dataset in the following two steps:
* Step 1: We will engage in proactive communication with the hospital and strive to make a subset of the raw data available within half a year. Similar to the procedures outlined in our manuscript, these data releases will also adhere to ethical review mandates.
* Step 2: In the future, we will examine the feasibility of releasing the complete dataset once it receives the necessary ethical review approval. This will enable researchers to utilize the large-scale dataset for further research purposes.
* **Q4: Discussions about the potential improvement.**
In addition to the model, the dataset itself also has a great impact on the score (e.g., the ratio of positive and negative samples, data noise, and label noise), which is why the scores on the clinical dataset are lower than those on the public datasets. We believe the scores can be improved mainly in two ways: first, optimizing the model (e.g., model design and training) for higher performance; second, improving the quality of the dataset (e.g., cleaning the data carefully before training). We will continue to explore better performance in the future.
In the main manuscript (limitations and future works), we stated that the predictions of our model mainly serve as a reference to assist doctors in achieving more efficient clinical diagnosis and treatment, rather than completely replacing doctors in seizure detection. Additionally, during our research, the doctors we engaged with responded positively to our model's ability to aid the diagnosis of epilepsy at the current stage.
References:
[1]Chen J, Yang Y, Yu T, et al. Brainnet: Epileptic wave detection from seeg with hierarchical graph diffusion learning[C]//Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2022: 2741-2751.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the answer of my questions and the clarifaction.
---
Reply to Comment 1.1.1:
Title: Thanks for the reviews
Comment: We are grateful for your valuable reviews and recognition of our work. | Rebuttal 1:
Rebuttal: ### Global response
We thank the reviewers for their close reading of this manuscript and their insightful comments.
In response to the reviewers' comments, we additionally performed 2 sets of experiments in the ablation study and 7 sets of experiments in the case study. Several important suggestions were made; we have considered each carefully and revised accordingly. Please find detailed responses in our replies to each reviewer below. We hope these updates address all key concerns and clarify the significance of our work.
**Note**
* In the global response, we **upload a pdf file**, which includes our additional experiments for the case study and two examples of SEEG signals with detection results.
* We conducted 2 additional ablation experiments on the reconstruction loss and the self-attention. The results are shown in Table 1.
* PPi-reconstruction: Pre-training without the reconstruction loss.
* PPi-self attention: Aggregating the representations from the time and frequency domains by simple mean pooling, without the self-attention layer.
Table 1: The ablation study on the reconstruction loss and the self-attention.
| | MAYO | | | | FNUSA | | | | Clinical | | | |
| ------------------ | ---------------- | ---------------- | ---------------- | ---------------- | ---------------- | ---------------- | ---------------- | ---------------- | ---------------- | ---------------- | ---------------- | ---------------- |
| | Pre. | Rec. | F1 | F2 | Pre. | Rec. | F1 | F2 | Pre. | Rec. | F1 | F2 |
| PPi-reconstruction | 44.01+/-11.56 | 50.29+/-10.39 | 34.15+/-8.26 | 36.89+/-8.75 | 60.78+/-11.33 | 62.31+/-9.79 | 59.48+/-7.9 | 60.02+/-6.86 | 22.04+/-8.38 | 44.84+/-10.93 | 21.23+/-6.41 | 25.01+/-7.96 |
| PPi-self attention | 48.82+/-5.10 | 60.2+/-3.07 | 51.15+/-4.64 | 55.41+/-4.43 | 65.14+/-6.01 | 66.91+/-2.95 | 62.17+/-3.93 | 63.98+/-3.42 | 28.67+/-5.61 | 46.98+/-5.46 | 29.56+/-3.59 | 32.59+/-2.68 |
| PPi | **49.85+/-6.93** | **69.67+/-2.82** | **54.35+/-4.72** | **61.07+/-4.69** | **71.73+/-4.06** | **70.81+/-2.14** | **70.61+/-2.82** | **70.55+/-2.28** | **29.76+/-5.45** | **47.59+/-5.16** | **30.92+/-3.45** | **35.51+/-2.35** |
We will add these two sets of experimental results to the original ablation study in our manuscript.
Thanks again for the thoughtful commentary. We have put in considerable effort to improve our manuscript, and we sincerely hope you will find our responses informative and helpful.
Pdf: /pdf/a9fdfe0363298ff657d8ccde3f84ea82cddbea67.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper proposes a patient-independent seizure detection framework called PPi for stereoelectroencephalography (SEEG) data. It utilizes self-supervised learning to take into account the discriminability of brain areas and the contextual coherence of SEEG signals, preserving the patterns of different channels. The authors propose channel background subtraction for distributional alignment of brain regions across patients to handle inter-patient domain shift. Evaluation is performed on two public datasets and one clinical dataset against several baseline methods.
Strengths: 1. Evaluation is performed on three separate datasets against several baseline methods.
2. Performance tables reported appear to consistently suggest that PPi provides improvements against the selected baselines for seizure detection.
3. The ablation studies support the need for different components that the model combines.
Weaknesses: 1. The presentation of the main methodology reads as a piecing together of multiple existing modules, making the novelty of the proposed method for this application unclear.
2. The tables do not report standard deviation measures for any of the methods or discussions of statistical significance measures, making it hard to gauge the robustness of the method to the data splits.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. The channel distribution alignment visualization in Fig. 5 does not appear to result in a clear separation between the epileptic and control subjects. Could the authors provide more intuition on why the representation leads to enhanced classification performance between the two classes?
2. A suggestion would be to include a shortened version of the explanation Section E: Brain Region Enhancement Details from the appendix in the main text of the work to improve readability and aid readers in understanding the multi-classification setup.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have identified the limitations of their work in a separate subsection within the conclusion and have indicated potential for clinical deployment. It would be great if the authors could provide more information regarding the clinical targets (e.g., detection accuracy) to be met for translation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for all the insightful comments. Responses to specific comments are listed below.
* **W1: Clarification for the unclearness of novelty raised by presentation.**
Thanks for pointing this out. The novel aspects of the proposed method are listed below:
* Our method contains two novel self-supervised pretraining tasks (i.e., channel discrimination and context swapping). Different from other SSL studies, the motivation of our designed tasks is to preserve the unique patterns of each channel, which is more consistent with the physiological mechanism of seizures.
* We propose two techniques including channel background subtraction and brain region enhancement in our method to handle the domain shift between different patients. Different from other domain generalization works (e.g. [1], [2], [3] ), we fully consider the characteristics of SEEG in our design, so that we can achieve better performance in this scenario.
We will incorporate the above content into the methodology section to enhance the clarity of the novelty in the proposed method.
* **W2: Report standard deviation of the experiments.**
We are sorry for not reporting the standard deviation of the experiments. Due to the character limit of the reply, we report the results with standard deviations for our model and several well-performing baselines as follows:
| | MAYO | | | | FNUSA | | | | Clinical | | | |
| --------------- | ------------ | ------------- | ------------ | ------------ | ------------ | ------------- | ------------ | ------------ | ------------ | ------------- | ------------ | ------------ |
| | Pre. | Rec. | F1 | F2 | Pre. | Rec. | F1 | F2 | Pre. | Rec. | F1 | F2 |
| TCN+CDANN | 16.13+/-5.80 | 68.14+/-4.17 | 24.74+/-7.67 | 37.84+/-8.74 | 32.71+/-6.42 | 69.62+/-2.77 | 43.09+/-5.61 | 54.88+/-3.58 | 1.42+/-0.51 | 44.20+/-13.85 | 2.72+/-0.98 | 6.08+/-2.21 |
| TCN+MLDG | 18.88+/-4.22 | 61.38+/-4.65 | 27.96+/-4.92 | 40.39+/-4.78 | 32.81+/-6.61 | 56.81+/-2.41 | 40.41+/-5.43 | 48.24+/-3.71 | 28.85+/-9.34 | 6.68+/-3.92 | 2.55+/-0.58 | 2.58+/-0.89 |
| MiniRocket+MTL | 21.67+/-7.15 | 46.11+/-12.08 | 27.72+/-7.38 | 35.03+/-7.90 | 56.85+/-5.25 | 59.71+/-10.71 | 56.93+/-6.94 | 58.28+/-9.00 | 12.57+/-9.03 | 45.79+/-15.03 | 4.02+/-3.79 | 5.15+/-2.87 |
| MiniRocket+VREx | 38.63+/-7.60 | 33.92+/-8.63 | 33.37+/-6.66 | 32.88+/-7.15 | 65.23+/-9.83 | 55.20+/-9.33 | 54.34+/-5.36 | 53.91+/-7.58 | 7.47+/-4.73 | 44.50+/-10.18 | 11.37+/-6.68 | 17.11+/-8.94 |
| SEEG-Net | 45.41+/-9.96 | 45.62+/-9.56 | 43.54+/-8.84 | 44.22+/-8.98 | 69.39+/-9.23 | 53.75+/-7.62 | 60.02+/-8.05 | 55.99+/-7.73 | 20.06+/-5.56 | 32.81+/-8.50 | 20.82+/-5.70 | 22.92+/-5.96 |
| PPi | 49.85+/-6.93 | 69.67+/-2.82 | 54.35+/-4.72 | 61.07+/-4.69 | 71.73+/-4.06 | 70.81+/-2.14 | 70.61+/-2.82 | 70.55+/-2.28 | 29.76+/-5.45 | 47.59+/-5.16 | 30.92+/-3.45 | 35.51+/-2.35 |
We will **update all the experiment results** in the manuscript with standard deviation in the final version.
* **Q1: Clarification for Fig.5 in case study with examples.**
Thanks for pointing this out. In SEEG-based seizure detection, there are some ambiguous samples in the labeling process. Not even human experts can be 100% sure about the labels of these samples, so there are discrepancies in the results marked by different annotators. It is also difficult for the model to handle these samples. For example, in Fig. 1 of the PDF file in the global response, the signals highlighted in blue are very similar to the normal signals (highlighted in yellow) but were labeled as seizure. Likewise, some signals look like seizures but are labeled as normal.
For the above reasons, in our scenario it is almost impossible to achieve a very clear separation between seizure and normal samples. In Fig. 5 of the manuscript, the seizure samples are clustered at the edges, enabling the classifier to distinguish most of the samples.
* **Q2: Add a shortened version of Section E in the appendix to the main text.**
Thank you for the good suggestion. We will add a new section (Section 5.6) in the manuscript which includes one of the figures from Fig. 7 to Fig. 13 in Appendix and the following paragraph:
"In order to demonstrate the effectiveness of brain region enhancement, we calculate the confusion matrix of the multi-classification. Fig. 6 shows the confution matrix from one of the patients (the confusion matrix of all the patients are shown in Appendix E), in which the vertical axis represents the multi-class label $y_{c,k}'$ and the horizontal axis represents the multi-class prediction result $\hat{y}_{c,k}'$. We can see that in the confusion matrix, most samples are distributed on the main diagonal, which reflects the good performance of the multi-classification task, illustrating the effectiveness of brain region enhancement."
References:
[1]Yiping Wang, Yanfeng Yang, Gongpeng Cao, Jinjie Guo, Penghu Wei, Tao Feng, Yang Dai, Jinguo Huang, Guixia Kang, and Guoguang Zhao. Seeg-net: An explainable and deep learning based cross-subject pathological activity detection method for drug-resistant epilepsy. Computers in Biology and Medicine, page 105703, 2022.
[2]Daehee Kim, Youngjun Yoo, Seunghyun Park, Jinkyu Kim, and Jaekoo Lee. Selfreg: Self-supervised contrastive regularization for domain generalization. In ICCV, pages 9619–9628, 2021.
[3]David Krueger, Ethan Caballero, Joern-Henrik Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Remi Le Priol, and Aaron Courville. Out-of-distribution generalization via risk extrapolation (rex). In ICML, pages 5815–5826. PMLR, 2021.
---
Rebuttal Comment 1.1:
Title: Response to the Rebuttal
Comment: Thank you for responding to my review comments. Based on the rebuttal, I have increased my score by a point.
---
Reply to Comment 1.1.1:
Title: Thanks for the reviews
Comment: We sincerely appreciate the reviewer’s effort in helping us to enhance the paper and recognition of our work. | Summary: This paper presents a pretraining-based model for patient-independent seizure detection (PPi) on SEEG data in the clinical scenario. The proposed method adopts a self-supervised pretraining strategy to extract information from SEEG signals while preserving the unique characteristics of each channel, and applies channel background subtraction and brain region enhancement techniques to improve the generalization ability of PPi. The experimental results carried out on two public datasets and a real clinical dataset have shown that the proposed method outperforms the SOTA baselines.
Strengths: - A pretraining-based model for patient-independent seizure detection (PPi) using SEEG signals.
- The proposed method adopts two self-supervised pretraining strategies (channel discrimination and context swapping) to extract information from SEEG signals while preserving the unique characteristics of each channel, and applies channel background subtraction and brain region enhancement techniques to effectively tackle the domain shift problem and thus improve the generalization ability of PPi.
Weaknesses: - The use of PSD-based features which is widely applied in the literature to extract features from the frequency domain of the EEG/SEEG signals.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) Various types of EEG/SEEG features have been proposed in the literature. It is suggested to highlight the effectiveness of PSD-based features in the proposed method.
2) Update the references by considering some recent and relevant studies.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, but only briefly. It is suggested to discuss this point in more detail.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for all the insightful comments. Responses to specific comments are listed below.
* **Q1: Highlight the effectiveness of PSD-based features in the proposed method.**
Thank you for the good suggestion. There are related works that support the effectiveness of PSD-based methods in seizure detection. For example, [1] stated that the spectral power of brain signals has the ability to track transient changes before and during seizures. [2] argued that the spectral power in certain sub-bands of the SEEG, specifically in higher-frequency sub-bands, may play a key role in seizure prediction.
We will add the above works in the manuscript to highlight the effectiveness of PSD-based features in our proposed method.
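To make this concrete, below is a minimal sketch of the kind of PSD band-power feature extraction discussed above (a plain periodogram with illustrative band boundaries, standing in for the actual feature pipeline of the paper, which may differ):

```python
import numpy as np

# Illustrative frequency bands in Hz (delta, theta, alpha, beta, gamma);
# the exact bands used in the paper may differ.
BANDS = ((1, 4), (4, 8), (8, 13), (13, 30), (30, 80))

def psd_band_features(signal, fs, bands=BANDS):
    """Average spectral power of a 1-D signal in each frequency band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))  # periodogram
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[mask].mean() if mask.any() else 0.0)
    return np.array(feats)

# Example: a synthetic 10 Hz oscillation sampled at 256 Hz for 2 s;
# its power should concentrate in the alpha band (8-13 Hz).
fs = 256
t = np.arange(0, 2, 1.0 / fs)
feats = psd_band_features(np.sin(2 * np.pi * 10 * t), fs)
```

In practice a windowed, averaged estimator (e.g. Welch's method) is preferred over a raw periodogram for noisy SEEG segments; the band-power vector per channel then serves as the frequency-domain feature.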
* **Q2: Update the references.**
Thank you for the good suggestion. We will update some of the references to more recent studies. The updates are as follows:
- In line 31 of the manuscript, sentence "After collecting the SEEG recordings of patients, the process of epilepsy detection and diagnosis is traditionally treated as a manual task that highly depends on a few experienced neuroscientists, requiring considerable time and human resources". We will replace the original reference of this sentence by [5].
- In line 38 of the manuscript, sentence "However, existing works for SEEG-based seizure detection mainly focus on the patient-specific setting". We will replace the original reference of this sentence by [3].
- In line 82 of the manuscript, sentence "Ayoubian et al. employ wavelet decomposition, feature extraction, adaptive thresholding and artifact removal in SEEG data". We will replace the sentence by "Truong et al. [6] propose Integer-Net to conduct seizure detection on both EEG and SEEG".
- In line 101 of the manuscript, we will add a sentence "Zhang et al. [4] regularize the discrepancy between closely-related domains to achieve domain generalization".
References:
[1] M. Bandarabadi, C. A. Teixeira, J. Rasekhi, and A. Dourado. Epileptic seizure prediction using relative spectral power features. Clinical Neurophysiology, 2014.
[2] T. Netoff, P. Yun, and K. Parhi. Seizure prediction using cost-sensitive support vector machine. In Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2009, pp. 3322-3325.
[3] J. Chen, Y. Yang, T. Yu, et al. BrainNet: Epileptic wave detection from SEEG with hierarchical graph diffusion learning. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2022, pp. 2741-2751.
[4] W. Zhang, M. Ragab, and C. S. Foo. Domain generalization via selective consistency regularization for time series classification. In 26th International Conference on Pattern Recognition (ICPR), IEEE, 2022, pp. 2149-2156.
[5] F. Mormann and R. G. Andrzejak. Seizure prediction: making mileage on the long and winding road. Brain, 139(6):1625-1627, 2016.
[6] N. D. Truong, A. D. Nguyen, L. Kuhlmann, et al. Integer convolutional neural network for seizure detection. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 8(4):849-857, 2018.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my comments. The references considered in the revised version are not very relevant, and most of them are old. It is suggested to check the literature review and consider some recent and relevant related studies (e.g., published work on related problems).
---
Reply to Comment 1.1.1:
Title: Response to the new comment
Comment: Thanks for the suggestions. We update our literature review with some recent and relevant related studies as follows:
**SEEG-based seizure detection.** SEEG is an emerging method applied in seizure detection, which can localize the SOZ more precisely than noninvasive recording methods. However, due to the low-quality, large-volume, and high-dimensional nature of SEEG data, it is still challenging to develop an automatic approach for SEEG-based seizure detection. Ganti et al. [1] improve seizure detection with temporal Generative Adversarial Networks (TGAN). Chen et al. [2] adopt a graph structure to detect epileptic waves. Xiao et al. [3] propose an SOZ localization method that analyzes long-term SEEG monitoring for the preoperative planning of epilepsy surgery. Although researchers have explored several possible approaches for SEEG-based seizure detection, almost all of these works focus on a patient-specific setting, and none can be applied in actual clinical scenarios.
**Domain generalization on brain signals.** Our goal is to predict epileptic seizures from the SEEG of unseen patients, which can be abstracted as a domain generalization (DG) problem on time series. Conceptually, DG deals with a challenging setting where one or several different but related domains are given, and the goal is to learn a model that can generalize to an unseen test domain. With the development of DG research in the fields of computer vision and natural language processing, related problems on brain signals have also attracted considerable research interest. Yang et al. [4] develop a new domain generalization method, ManyDG, that can scale to such many-domain problems for the seizure detection task on EEG. Ayodele et al. [5] use transfer component analysis and LSTMs to detect epilepsy on EEG data. Jeon et al. [6] propose a mutual information-driven method for subject-invariant and class-relevant deep representation learning of EEG. Most current DG works on brain signals are conducted on EEG data rather than the more informative SEEG. Although Wang et al. [7] study SEEG-based seizure detection in the patient-independent setting, they conduct experiments on datasets that are not only much smaller than practical records but also manually denoised and sampled to a balanced positive-negative ratio, which introduces a large bias relative to real clinical data, indicating that their work is still far from clinical requirements.
Thanks again for your valuable suggestions which greatly improve our work.
References:
[1] B. Ganti, G. Chaitanya, R. S. Balamurugan, N. Nagaraj, K. Balasubramanian, and S. Pati. Time-series generative adversarial network approach of deep learning improves seizure detection from the human thalamic SEEG. Frontiers in Neurology, 2022.
[2] J. Chen, Y. Yang, T. Yu, et al. BrainNet: Epileptic wave detection from SEEG with hierarchical graph diffusion learning. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2022.
[3] L. Xiao, C. Li, Y. Wang, J. Chen, W. Si, C. Yao, X. Li, C. Duan, and P.-A. Heng. Automatic localization of seizure onset zone from high-frequency SEEG signals: A preliminary study. IEEE Journal of Translational Engineering in Health and Medicine, 2021.
[4] C. Yang, M. B. Westover, and J. Sun. ManyDG: Many-domain generalization for healthcare applications. arXiv preprint arXiv:2301.08834, 2023.
[5] K. P. Ayodele, W. O. Ikezogwo, M. A. Komolafe, and P. Ogunbona. Supervised domain generalization for integration of disparate scalp EEG datasets for automatic epileptic seizure detection. Computers in Biology and Medicine, 120:103757, 2020.
[6] E. Jeon, W. Ko, J. S. Yoon, and H.-I. Suk. Mutual information-driven subject-invariant and class-relevant deep representation learning in BCI. IEEE Transactions on Neural Networks and Learning Systems, 2021.
[7] Y. Wang, Y. Yang, G. Cao, J. Guo, P. Wei, T. Feng, Y. Dai, J. Huang, G. Kang, and G. Zhao. SEEG-Net: An explainable and deep learning based cross-subject pathological activity detection method for drug-resistant epilepsy. Computers in Biology and Medicine, 105703, 2022. | null | null | null | null |
Safety Verification of Decision-Tree Policies in Continuous Time | Accept (spotlight) | Summary: This paper introduces a novel verification algorithm that enables the verification of decision tree policies for continuous-time systems (also applicable to discrete-time systems). This approach ensures a sound and compact representation of reachable sets using Taylor models, which can be efficiently propagated through non-linear dynamics systems. The algorithm leverages the decision tree structure to propagate a set-based approximation of abstract reachable states through decision nodes, allowing for splits at decision boundaries. The effectiveness of this approach is demonstrated through the verification of safety for several decision trees that imitate neural-network policies in classic low-dimensional nonlinear control systems (e.g. cartpole). This method can provide robust safety and reachability guarantees for all system behaviors in these benchmarks.
Strengths: + The proposed algorithm marks a significant advancement as the first formal verification method for verifying reach-avoid properties in a decision-tree controlled system (DTCS) with continuous-time dynamics, including nonlinear systems.
+ It achieves this by efficiently propagating a set of states through the system dynamics and tailoring general hybrid system reachability verification specifically for decision trees. The algorithm effectively utilizes axis-aligned predicates and interval-based set approximations to enhance the precision of abstract states, mitigating the wrapping effect.
+ Moreover, the algorithm showcases its capability to generalize to unbounded time horizons by conducting reachable states fixpoint calculations, providing the potential for inductive safety verification over infinite time horizons.
+ The theoretical analysis conducted in this paper provides insights into the main algorithm.
+ The paper introduces a tool for ensuring safety and reachability guarantees in decision-tree controlled systems, pushing the boundaries of verification techniques for learning-enabled systems in the context of continuous-time dynamics.
Weaknesses: + The proposed method consists of two parameterized procedures: one for analyzing dynamical systems and another for decision tree policies. The verification algorithm for dynamical systems is well-established in the related literature. To enhance the clarity of the paper, it would be beneficial to treat the verification procedure as background information in a dedicated section, allowing the paper to primarily focus on its contribution to the analysis of decision tree policy reachability.
+ The evaluation presented in the paper falls short in adequately showcasing the limitations of the verification algorithm in practical settings. To provide a more comprehensive analysis, it would be valuable to explore scenarios where the reachability analysis may potentially diverge. For instance, investigating the scalability of the reachability analyzer to deep decision trees with a significant number of nodes would be helpful. Additionally, conducting an ablation study on decision trees with various shapes may provide insights into the algorithm's behavior.
+ Furthermore, the benchmarks used in the evaluation appear to be simple, and it seems that a linear controller could be trained directly using reinforcement learning for all of these benchmarks and verified using reachability analysis. To strengthen the evaluation, it would be beneficial to include a benchmark scenario where a linear controller fails, thus highlighting the necessity and added value of decision tree controllers that can be additionally verified. For example, cartpole swinging up could be a suitable benchmark to demonstrate the necessity of decision tree controllers.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: + Why does the cart-pole benchmark in the paper use a non-standard initial state set? Could the verification algorithm be applied to the cart-pole system using the initial state set provided in the Gym environment?
+ Is the verification algorithm capable of scaling to slightly more complex benchmarks, such as the pendulum and cart-pole swinging up tasks? Can it provide proof for the safety and reachability properties of these systems over an infinite time horizon?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The paper lacks a specific discussion on the scalability of the approach in terms of the size and complexity of control systems and decision trees that can be effectively verified. It would be helpful to address this aspect in the final version of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the excellent review.
> To enhance the clarity of the paper, it would be beneficial to treat the verification procedure as background information in a dedicated section, allowing the paper to primarily focus on its contribution to the analysis of decision tree policy reachability.
Thank you for the suggestion. We will restructure the paper accordingly.
> The evaluation presented in the paper falls short in adequately showcasing the limitations of the verification algorithm in practical settings. To provide a more comprehensive analysis, it would be valuable to explore scenarios where the reachability analysis may potentially diverge. For instance, investigating the scalability of the reachability analyzer to deep decision trees with a significant number of nodes would be helpful.
Our experiments already include a relatively large decision tree (depth 10, 177 nodes) for the drone quadrotor problem to show that large trees are also admissible, and as we argued in the paper, there exists a generic hybrid-system algorithm to verify DTCS, which is however impractical (Section 4.3). The main message of the paper is that a dedicated verification algorithm significantly outperforms such a generic algorithm.
Nevertheless, we agree that scalability is an interesting question. However, evaluating scalability in reachability analysis is not straightforward, since neither 1) the size of the tree nor 2) the dynamics of the system impacts the algorithm in a predictable way. Below we give examples for both points.
1) The size of the tree is not necessarily dominating because multiple leaves may share the same action, or some leaves may not be used at all during the execution. What matters is the number of times the decision boundaries are partially crossed by the reachable states after each time step. Given that different controllers may implement different policies, we cannot simply compare analyses with controllers of different size, since the verification performance may even improve for larger controllers. For instance, please see the two alternative controllers for the acrobot system (Fig. 9 in the supplementary material), whose behaviors differ significantly (Fig. 6). Note that in general we are interested in smaller trees since this is one of the main motivations for using decision-tree controllers over other (learned) non-interpretable controllers.
2) It is hard to characterize the 'size' of a dynamical system. For instance, the competition on verifying continuous and hybrid systems (see [1*]) showcases that most verification solvers struggle with the *three-dimensional* Robertson chemical reaction, whereas the *seven-dimensional* Laub-Loomis model has several solvers that easily solve it. Note that this part is orthogonal to our work since our contribution is not in the algorithm for dynamical systems.
We will add this discussion to the paper.
[1*] Geretti et al.: ARCH-COMP22 Category Report: Continuous and Hybrid Systems with Nonlinear Dynamics. 2022.
> it seems that a linear controller could be trained directly using reinforcement learning
Thank you for the suggestion. We understand that this is only a suggestion to strengthen the story. Our acrobot benchmark requires a nonlinear controller. Note that our acrobot is similar to an up-swinging cart pole since the acrobot starts in a downwards position. It is true that some of the benchmarks admit a linear controller, however, obtaining suitable controllers is not the focus of our work.
> Why does the cart-pole benchmark in the paper use a non-standard initial state set? Could the verification algorithm be applied to the cart-pole system using the initial state set provided in the Gym environment?
The set is a union of two sets. The reason we added this set was simply to demonstrate that our approach can naturally deal with non-convex shapes. The first set is similar to the set from the Gym environment. Adding the second set makes the union larger (and hence makes the problem harder).
We re-ran the experiment using the set from the Gym for the first set (i.e., still a harder challenge) and obtained a very similar result as before without any noticeable impact on reachable states and verification time. In the final version, we can use this refined initial set.
> Is the verification algorithm capable of scaling to slightly more complex benchmarks, such as the pendulum and cart-pole swinging up tasks? Can it provide proof for the safety and reachability properties of these systems over an infinite time horizon?
The acrobot benchmark from our evaluation is similar to a pendulum or an up-swinging cart/pole system. Thus we expect that our algorithm may be able to provide a proof, provided the controller is safe and sufficiently stable.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: Thanks for the detailed response. I find it difficult to draw a parallel between the acrobot benchmark and the cart pole swing-up problem as they differ significantly in their requirements. The acrobot benchmark only requires reaching a goal position, while the cart pole swing-up problem demands both reaching a vertical position and maintaining balance. The cart pole problem does require a nonlinear controller, but the acrobot task can be tackled with a simple linear one for reachability only. My concern remains about the simplicity of the benchmarks used for evaluation. This raises questions about whether the observed good performance is due to the lack of significant nonlinearity in the learned controllers.
The cart pole swing-up problem not only serves as a suitable benchmark to showcase the importance of decision tree controllers but also points to a more difficult verification issue – how can your approach be used to verify controller correctness over an infinite time horizon?
I kindly request the authors to acknowledge these limitations in their paper or provide evidence demonstrating that their tool is indeed capable of verifying the full correctness of a cart pole swing-up controller.
---
Reply to Comment 1.1.1:
Title: Follow-up on swing-up model
Comment: Thank you for the clarification. We agree that, while the dynamics of the swing-up cart/pole system and the acrobot system are similar, the control tasks differ.
We do not see a fundamental reason why nonlinearity of the controller would make the _verification_ task inherently more difficult. Note that decision-tree controllers are nonlinear controllers.
Regarding your question about the swing-up cart/pole system, we agree that it would be interesting to include this benchmark in addition to the four benchmarks considered in our work. Intuitively it should be possible to verify, given that up-swinging cart/pole becomes a standard cart/pole after the pole is brought into the upward position, and we verified both acrobot and cart/pole.
We considered experimenting with the swing-up cart/pole system during this short rebuttal phase, but did not manage to set it up due to numerical stability issues that go beyond the contributions of our work (see below for details).
In the camera-ready version we will add a discussion about the limitations you raised.
---
### Issues with the implementation of the swing-up cart/pole system
We had to adapt the differential equations with respect to the model in the paper to faithfully model the swing-up phase. To that end, we found the system dynamics online and trained a decision tree that seemingly manages to perform the control (based on simulations from selected points, but not verified). However, the new differential equations are more complex than what we used in our cart/pole system, and it turned out that the Taylor-model algorithm struggles with this model. Although the Taylor-model algorithm can maintain precision for most of the up-swing phase, at some point it starts diverging in our implementation.
The issue with approximation error even arises when starting from a single point - for which no splitting occurs in our algorithm, and thus our algorithm does not add any approximation error. Hence these numerical issues would occur for any algorithm that uses a Taylor-model algorithm for the continuous-time reachability analysis (which is out of scope for our paper).
We will consider modelling the problem using a simpler set of differential equations that could still reasonably well represent the system, but we did not manage to do this during the rebuttal phase. | Summary: The paper presents an approach for verifying safety (reach-avoid) properties of controlled systems where the state space of the system is continuous, its dynamics are continuous-time, and the policy/controller is described by a decision tree that chooses actions from a continuous action space. The proposed reachability-based verification algorithm invokes two main sub-procedures $post_f$ and $post_{\mathcal{T}}$. Given an initial set of states and an action, $post_f$ computes the reachable states induced by the dynamics using standard techniques based on Taylor models. On the other hand, $post_{\mathcal{T}}$ uses the structure of a decision tree to compute an overapproximation of all possible state-action pairs induced by the policy given an initial set of states. The design of $post_{\mathcal{T}}$ as well as the overall verification algorithm are the primary contributions of this work. The paper also includes theoretical results establishing the soundness and the relative completeness of the verification algorithm, as well as results establishing the computational hardness of the safety verification problem for systems with decision tree controllers. Moreover, empirical evaluation shows the scalability of the verification algorithm compared to other, more generic techniques.
Strengths: The paper presents an elegant verification algorithm as well as establishes hardness results for an interesting, well-specified family of controlled systems. The overall verification algorithm is clean and generic, and provides precise specifications that need to be satisfied by the sub-procedures $post_f$ and $post_{\mathcal{T}}$ for the overall algorithm to be a sound and complete verifier. The key idea for computing reachability through the decision tree policy---namely, using box abstractions for efficient calculation of inclusion checks and bisection operation---is sensible and intuitive, and suggests clear directions for future research. The paper also presents useful insights on extending the analysis to unbounded time settings. The empirical evaluation supports the claim that designing a verification procedure that exploits the structure of Decision Tree Controlled Systems (DTCS) can lead to efficiency wins. I also greatly appreciate the quality of the writing and the formal rigor of the presented ideas (barring a few concerns described below).
Weaknesses: My primary concerns have to do with the presentation of the technical material and the comparison with related work.
1. The paper repeatedly suggests that exploiting the special structure of DTCS for verification is one of the key insights of the presented work but a clear explanation of this special structure and how it is exploited is only cursorily explained in Section 4.3 (with a more detailed explanation in the appendix). I am not an expert on the topic and for readers like me, it would help if these structural insights were clearly explained in the initial sections of the paper.
2. I find the presented ideas about generalizing to unbounded time via fixpoints very interesting. However, the text in Section 3.2 seemed handwavy, and I would very much appreciate a more precise explanation of the ideas. For instance, I found the comments on line 226 ("However, due to the discrete ...") and on line 230 quite cryptic. I am also confused about the fact that while $post_f$ takes both a set of states and a time interval as input parameters, the fixed point check only considers the input states (i.e., assumes time-invariant dynamics). If the dynamics is time-invariant, why does $post_f$ need $t_0$ and $t_1$ as parameters? Can $t_0$ and $t_1$ be fixed as 0 and $\tau$?
3. The evaluation would be much stronger if there were a head-to-head comparison of the algorithm in this paper with reachability tools for hybrid automata. Section 4.3 briefly touches upon this, but a direct comparison for all four examples would be quite instructive. Another minor comment is that given the amount of detail in Figures 4,5, and 6, it would help to make them higher resolution.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: My questions are related to comments above in the Weaknesses section.
1. What structure of decision trees does the algorithm exploit? Is the structure primarily used in the design of Algorithm 3, or does Algorithm 1 also exploit the structure?
2. Repeating the second comment above, the fixed point check seems to assume time-invariant dynamics. Is this correct? If so, why pass $t_0$ and $t_1$ as parameters to $post_f$ instead of simply passing $\tau$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The paper does not explicitly discuss the limitations and might benefit from a small discussion about the same. For instance, a discussion about the scalability of the approach and loss of precision due to approximations (related to the discussion in the last para of the conclusion) would be helpful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the excellent review.
> exploiting the special structure of DTCS for verification is one of the key insights of the presented work but a clear explanation of this special structure and how it is exploited is only cursorily explained.
Thank you for the suggestion. We will improve the explanation. For context, remember that in our setting, Algorithm 3 receives a set of states X, which is then propagated down the branches of the decision tree. Compared to analyzing a DTCS with a generic hybrid-system tool, the most important aspects of the special structure are as follows.
(1) The decision tree partitions the set X. In particular, each predicate either cuts X in two or keeps it intact. Furthermore, if X satisfies a DT predicate, we do not have to explore the complement branch. In a general hybrid system, there is no "else case" and the conditions may overlap; hence the (expensive) intersection computations have to be performed repeatedly and the complement branch has to be analyzed as well.
(2) If the set X only leads to leaves with the same action, we do not need to split it (this happens quite often), which avoids the main source for loss of precision altogether. In a general hybrid system, combined with aspect (1) above, the connection is lost.
(3) Axis-aligned predicates allow to use (simple but efficient) box abstractions.
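To make aspects (1)-(3) concrete, here is a small sketch of propagating an axis-aligned box through a decision tree, splitting it exactly at the decision boundaries it straddles (illustrative data structures, not our actual implementation):

```python
# Minimal sketch: a box is a list of (lo, hi) intervals, one per dimension;
# an internal tree node is (dim, threshold, left, right) encoding the
# predicate x[dim] <= threshold; a leaf is an action label (a string here).

def propagate(node, box):
    """Return a list of (sub-box, action) pairs reachable from `box`."""
    if isinstance(node, str):          # leaf: every state in the box gets this action
        return [(box, node)]
    dim, thr, left, right = node
    lo, hi = box[dim]
    if hi <= thr:                      # box entirely satisfies the predicate
        return propagate(left, box)
    if lo > thr:                       # box entirely violates the predicate
        return propagate(right, box)
    # Box straddles the boundary: bisect it exactly at the threshold.
    # (A refinement could first check whether both subtrees agree on the
    # action, in which case no split is needed -- aspect (2) above.)
    left_box = [iv if d != dim else (lo, thr) for d, iv in enumerate(box)]
    right_box = [iv if d != dim else (thr, hi) for d, iv in enumerate(box)]
    return propagate(left, left_box) + propagate(right, right_box)

# Tree: if x0 <= 0.5 then "brake"
#       else (if x1 <= 0.0 then "brake" else "accelerate")
tree = (0, 0.5, "brake", (1, 0.0, "brake", "accelerate"))
parts = propagate(tree, [(0.0, 1.0), (-1.0, 1.0)])
```

Note how the axis-aligned predicates keep every sub-box a box, so no expensive general intersection is ever computed, and each predicate either keeps the box intact or cuts it in two.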
> Is the structure primarily used in the design of Algorithm 3, or does Algorithm 1 also exploit the structure?
Algorithm 1 does not exploit the structure of decision trees (it is actually generic in the type of controller). But it exploits the structure of a periodic control system: Algorithm 3 only has to be called at fixed points in time, and Algorithm 2 has to be run for a fixed amount of time.
> I find the presented ideas about generalizing to unbounded time via fixpoints very interesting. However, the text in Section 3.2 seemed handwavy
Thank you for the feedback. This section is indeed a very brief overview. The discussion is considered folklore in the reachability community, and the reason we included it is to be self-contained with respect to the experiments. In short, once the system finds itself in a state that was already analyzed previously, we can conclude that the system has reached a fixed point. This observation naturally generalizes to sets of states. We will make the description more formal.
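As a toy illustration of that observation (using plain box inclusion as the containment check, as a stand-in for the Taylor-model flowpipes used in practice):

```python
# Toy fixpoint check: iterate a one-step reachability map on boxes and stop
# once the new box is contained in a previously analyzed one. Containment
# of the new states in already-explored states justifies concluding safety
# for unbounded time. (The contractive map below is a hypothetical stand-in
# for a real post operator.)

def contains(outer, inner):
    return all(ol <= il and ih <= oh
               for (ol, oh), (il, ih) in zip(outer, inner))

def post_step(box):
    # Stand-in dynamics: each dimension contracts toward 0 by a factor 1/2,
    # with a small bloating term modeling over-approximation error.
    return [(0.5 * lo - 0.01, 0.5 * hi + 0.01) for lo, hi in box]

def reaches_fixpoint(box, max_steps=100):
    seen = [box]
    for _ in range(max_steps):
        box = post_step(box)
        if any(contains(old, box) for old in seen):
            return True                # all future behavior already covered
        seen.append(box)
    return False
```

For a stable closed-loop system, e.g. `reaches_fixpoint([(-1.0, 1.0), (-2.0, 2.0)])` succeeds after one step, since the contracted box lies inside the initial one; for diverging dynamics the check would never fire and the loop would exhaust its budget.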
> The evaluation would be much stronger if there were a head-to-head comparison of the algorithm in this paper with reachability tools for hybrid automata [...] a direct comparison for all four examples would be quite instructive.
Thank you for the suggestion. The other examples could also not be solved by the generic approach. We agree that it would be interesting to provide further evidence, however (1) conceptually our approach is more efficient than a generic approach (explained above), and (2) since the generic approach could not provide any results, a tabular comparison is not possible.
> given the amount of detail in Figures 4,5, and 6, it would help to make them higher resolution.
Agreed.
> the fixed point check seems to assume time-invariant dynamics. Is this correct? If so, why pass $t_0$ and $t_1$ as parameters to $post_f$ instead of simply passing $\tau$?
Indeed, we consider time-invariant systems. Yes, we could pass tau instead most of the time, except potentially in the last time interval, where the correct value is t1-t0. We can see that passing the time interval can lead to confusion. Since t1-t0 is unnecessarily complex, we will instead pass tau as suggested and assume that T is a multiple of tau to simplify the presentation.
> The paper does not explicitly discuss the limitations and might benefit from a small discussion about the same.
Our experiments already include a relatively large decision tree (depth 10, 177 nodes) for the drone quadrotor problem to show that large trees are also admissible, and as we argued in the paper, there exists a generic hybrid-system algorithm to verify DTCS, which is however impractical (Section 4.3). The main message of the paper is that a dedicated verification algorithm significantly outperforms such a generic algorithm.
Nevertheless, we agree that scalability is an interesting question. However, evaluating scalability in reachability analysis is not straightforward, since neither 1) the size of the tree nor 2) the dynamics of the system impacts the algorithm in a predictable way. Below we give examples for both points.
1) The size of the tree is not necessarily dominating because multiple leaves may share the same action, or some leaves may not be used at all during the execution. What matters is the number of times the decision boundaries are partially crossed by the reachable states after each time step. Given that different controllers may implement different policies, we cannot simply compare analyses with controllers of different size, since the verification performance may even improve for larger controllers. For instance, please see the two alternative controllers for the acrobot system (Fig. 9 in the supplementary material), whose behaviors differ significantly (Fig. 6). Note that in general we are interested in smaller trees since this is one of the main motivations for using decision-tree controllers over other (learned) non-interpretable controllers.
2) It is hard to characterize the 'size' of a dynamical system. For instance, the competition on verifying continuous and hybrid systems (see [1*]) showcases that most verification solvers struggle with the *three-dimensional* Robertson chemical reaction, whereas the *seven-dimensional* Laub-Loomis model has several solvers that easily solve it. Note that this part is orthogonal to our work since our contribution is not in the algorithm for dynamical systems.
We will add this discussion to the paper.
[1*] Geretti et al.: ARCH-COMP22 Category Report: Continuous and Hybrid Systems with Nonlinear Dynamics. 2022.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for the detailed response to my questions. This is a nice piece of work and I will keep my score. | Summary: The paper puts forward a verification method for decision tree-based systems in
continuous time. The method implements a reachability algorithm that computes
over-approximations of the set of reachable states for a sequence of time
intervals until a time horizon is reached. The approximations at each step are
derived using Taylor models.
Strengths: While the contribution is based on standard reachability-based verification
methods for hybrid and neural network-driven systems, the development of said
methods to verify decision trees is novel.
Good experimental evaluation which shows improvements over the state-of-the-art
in reachability for hybrid automata. This can in my view be further improved by
including the following:
1. a discussion of the scalability of the method with respect to a wider
range of model sizes.
2. a discussion of the comparative amenability to reachability-based
verification of decision tree-based and neural network-based systems.
Weaknesses: The paper does not include a discussion on the scalability of the proposed
method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I have no questions. I thank the authors for the clarity of the text.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the excellent review.
> The paper does not include a discussion on the scalability of the proposed method.
Our experiments already include a relatively large decision tree (depth 10, 177 nodes) for the drone quadrotor problem to show that large trees are also admissible, and as we argued in the paper, there exists a generic hybrid-system algorithm to verify DTCS, which is however impractical (Section 4.3). The main message of the paper is that a dedicated verification algorithm significantly outperforms such a generic algorithm.
Nevertheless, we agree that scalability is an interesting question. However, evaluating scalability in reachability analysis is not straightforward, since neither 1) the size of the tree nor 2) the dynamics of the system impacts the algorithm in a predictable way. Below we give examples for both points.
1) The size of the tree is not necessarily dominating because multiple leaves may share the same action, or some leaves may not be used at all during the execution. What matters is the number of times the decision boundaries are partially crossed by the reachable states after each time step. Given that different controllers may implement different policies, we cannot simply compare analyses with controllers of different size, since the verification performance may even improve for larger controllers. For instance, please see the two alternative controllers for the acrobot system (Fig. 9 in the supplementary material), whose behaviors differ significantly (Fig. 6). Note that in general we are interested in smaller trees since this is one of the main motivations for using decision-tree controllers over other (learned) non-interpretable controllers.
2) It is hard to characterize the 'size' of a dynamical system. For instance, the competition on verifying continuous and hybrid systems (see [1*]) showcases that most verification solvers struggle with the *three-dimensional* Robertson chemical reaction, whereas the *seven-dimensional* Laub-Loomis model has several solvers that easily solve it. Note that this part is orthogonal to our work since our contribution is not in the algorithm for dynamical systems.
We will add this discussion to the paper.
[1*] Geretti et al.: ARCH-COMP22 Category Report: Continuous and Hybrid Systems with Nonlinear Dynamics. 2022.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: Thank you for the detailed response which addresses my concern on scalability. My view of the paper remains positive. | Summary: This paper presents a method to solve the reach-avoid problem for dynamical systems controlled by a decision tree in continuous time. The authors assert that this is the first paper to solve the problem in the continuous time setting. In the paper, the authors first provide a good overview of decision trees, the use of decision trees as neural network surrogates, and verification of decision tree controlled systems (DTCS). They also explain why tools developed for verification of neural network controlled systems are not directly applicable to DTCS. The authors then explain their approach, which is based on a typical reachability algorithm and determines the set of reachable states for an iteration of the control loop.
Strengths: 1. The problem of DTCS verification is interesting and well motivated.
2. The paper is clearly and concisely written, and the justification of the paper as described is sound. The authors describe in great detail how their algorithms satisfy the equations necessary for the reach-avoid problem in DTCS. The authors also provide evaluations of their approach using multiple benchmark problems.
Weaknesses: 1. The novelty of the approach is not adequately explained. I believe that it is novel, but the authors need to explicitly state which parts of the algorithm are specifically designed for use with DTCS and which are just adapted from the typical reachability algorithm.
2. Section 3.2, which describes the generalization of the solution to unbounded time problems, could perhaps use more details.
3. The authors should also include some statement on how common axis-aligned predicates may be found in real-world DTCS. Is this a reasonable assumption?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the comments above on weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper could use more discussion on its limitations, particularly regarding scalability. The work does not have obvious negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the excellent review.
> The novelty of the approach is not adequately explained. I believe that it is novel, but the authors need to explicitly state which parts of the algorithm are specifically designed for use with DTCS and which are just adapted from the typical reachability algorithm.
Algorithm 3 as well as Theorem 2 are completely new. We will restructure the paper to clarify this point. Compared to analyzing a DTCS with a generic hybrid-system tool, the most important aspects of the special structure that our algorithm is able to exploit are as follows.
(1) The decision tree partitions the set X. In particular, each predicate either cuts X in two or keeps it intact. Furthermore, if X satisfies a DT predicate, we do not have to explore the complement branch. In a general hybrid system, there is no "else case" and the conditions may overlap; hence the (expensive) intersection computations have to be performed repeatedly and the complement branch has to be analyzed as well.
(2) If the set X only leads to leaves with the same action, we do not need to split it (this happens quite often), which avoids the main source for loss of precision altogether. In a general hybrid system, combined with aspect (1) above, the connection is lost.
(3) Axis-aligned predicates allow the use of (simple but efficient) box abstractions.
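To illustrate aspects (1)-(3), the following is a minimal sketch of our own devising (not Algorithm 3 from the paper): it propagates an axis-aligned box through a decision tree with predicates of the form x[dim] <= threshold, explores only one branch when the whole box satisfies or violates a predicate, and reports when no split is needed because all reachable leaves share one action. The tree encoding and boundary handling are simplified assumptions for the example.

```python
def reachable_actions(node, box):
    """Set of actions a box of states can reach in an axis-aligned tree.

    A tree is either an action label (leaf) or a tuple
    (dim, threshold, left, right) encoding the predicate x[dim] <= threshold.
    box is a list of (lo, hi) intervals, one per state dimension.
    Boundary handling is simplified for the illustration.
    """
    if not isinstance(node, tuple):          # leaf: a single action
        return {node}
    dim, thr, left, right = node
    lo, hi = box[dim]
    if hi <= thr:                            # whole box satisfies the predicate:
        return reachable_actions(left, box)  # no 'else case' to intersect with
    if lo >= thr:                            # whole box violates the predicate
        return reachable_actions(right, box)
    # Box straddles the decision boundary: split it along this dimension only.
    left_box, right_box = box.copy(), box.copy()
    left_box[dim], right_box[dim] = (lo, thr), (thr, hi)
    return (reachable_actions(left, left_box)
            | reachable_actions(right, right_box))

tree = (0, 0.5, "brake", (1, 1.0, "coast", "accelerate"))
print(reachable_actions(tree, [(0.6, 0.9), (1.2, 1.5)]))  # {'accelerate'}
```

When the returned set is a singleton, the reachable set never needs to be split, which corresponds to aspect (2); the interval tests on each dimension correspond to the box abstractions of aspect (3).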
> Section 3.2, which describes the generalization of the solution to unbounded time problems, could perhaps use more details.
Thank you for the feedback. This section is indeed a very brief overview. The discussion is considered folklore in the reachability community, and the reason we included it is to be self-contained with respect to the experiments. In short, once the system finds itself in a state that was already analyzed previously, we can conclude that the system has reached a fixed point. This observation naturally generalizes to sets of states. We will make the description more formal.
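The fixed-point observation can be sketched as follows. This is our own minimal illustration using axis-aligned boxes and a placeholder `step` function for one control cycle; the paper's Taylor-model sets are more precise, and all names here are assumptions for the example.

```python
def contained(box, boxes):
    """True if box is covered by a single previously seen box (coarse check)."""
    return any(all(plo <= lo and hi <= phi
                   for (lo, hi), (plo, phi) in zip(box, prev))
               for prev in boxes)

def analyze_unbounded(init_box, step, max_iters=1000):
    """Iterate one-cycle reachability until the new set was seen before."""
    seen = []
    box = init_box
    for _ in range(max_iters):
        if contained(box, seen):
            return seen            # fixed point: all future behavior covered
        seen.append(box)
        box = step(box)
    raise RuntimeError("no fixed point within iteration budget")

# Toy contractive dynamics: each cycle halves the distance to the origin,
# so the second box already lies inside the first and the analysis stops.
step = lambda box: [(lo / 2, hi / 2) for lo, hi in box]
invariant = analyze_unbounded([(-1.0, 1.0), (-2.0, 2.0)], step)
```

In this toy run the analysis terminates after a single step because the shrunk box is contained in the initial one, which is exactly the "state already analyzed previously" condition described above.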
> The authors should also include some statement on how common axis-aligned predicates may be found in real-world DTCS. Is this a reasonable assumption?
Axis-aligned predicates are very standard, also for DTCS. The tool Uppaal Stratego (David et al. 2015) learns such DT controllers, which are applied in industrial applications (e.g., to control smart homes [1*] and traffic lights [2*]). The tool dtControl (Ashok et al. 2020) also uses axis-aligned predicates.
> The paper could use more discussion on its limitations, particularly regarding scalability.
Our experiments already include a relatively large decision tree (depth 10, 177 nodes) for the drone quadrotor problem to show that large trees are also admissible, and as we argued in the paper, there exists a generic hybrid-system algorithm to verify DTCS, which is however impractical (Section 4.3). The main message of the paper is that a dedicated verification algorithm significantly outperforms such a generic algorithm.
Nevertheless, we agree that scalability is an interesting question. However, evaluating scalability in reachability analysis is not straightforward, since neither 1) the size of the tree nor 2) the dynamics of the system impacts the algorithm in a predictable way. Below we give examples for both points.
1) The size of the tree is not necessarily dominating because multiple leaves may share the same action, or some leaves may not be used at all during the execution. What matters is the number of times the decision boundaries are partially crossed by the reachable states after each time step. Given that different controllers may implement different policies, we cannot simply compare analyses with controllers of different size, since the verification performance may even improve for larger controllers. For instance, please see the two alternative controllers for the acrobot system (Fig. 9 in the supplementary material), whose behaviors differ significantly (Fig. 6). Note that in general we are interested in smaller trees since this is one of the main motivations for using decision-tree controllers over other (learned) non-interpretable controllers.
2) It is hard to characterize the 'size' of a dynamical system. For instance, the competition on verifying continuous and hybrid systems (see [3*]) showcases that most verification solvers struggle with the *three-dimensional* Robertson chemical reaction, whereas the *seven-dimensional* Laub-Loomis model has several solvers that easily solve it. Note that this part is orthogonal to our work since our contribution is not in the algorithm for dynamical systems.
We will add this discussion to the paper.
[1*] Larsen et al.: Online and Compositional Learning of Controllers with Application to Floor Heating. 2016.
[2*] Bilgram et al.: Online and Proactive Vehicle Rerouting with Uppaal Stratego. 2020.
[3*] Geretti et al.: ARCH-COMP22 Category Report: Continuous and Hybrid Systems with Nonlinear Dynamics. 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my comments. I am satisfied with the response and will raise my score accordingly. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Doubly Constrained Fair Clustering | Accept (poster) | Summary: The paper investigates the fair $k$-center problem and considers the combination of two fairness notions: Group Fairness (GF) and Diversity in Center Selection (DS). The authors show that a constant approximation algorithm for one constraint (GF or DS only) can be extended to a constant approximation algorithm for both constraints simultaneously. Moreover, the authors show that both GF and DS are incompatible with a collection of other distance-based fairness notions.
Strengths: - Due to the importance of fair clustering and multiple fairness notions, it is interesting to study the relationship between different fairness notions.
- The theoretical results show a separation between DP notions (GF and DS), and distance-based notions, which look interesting to me.
Weaknesses: - The studied objective is fair $k$-center, which is less used in machine learning. The extension of the paper to fair $k$-median and fair $k$-means would be more interesting.
- It lacks a discussion of the limitations and future works.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Could you say anything about fair $k$-median/means?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No, the authors should include a discussion on limitations and societal impact.
Flag For Ethics Review: ['Ethics review needed: Discrimination / Bias / Fairness Concerns']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback.
> What about $k$ median and means?
We would like to note that DS for k-median and k-means was only solved very recently in the paper “Approximation Algorithms for Fair Range Clustering,” which appeared in ICML 2023 (a couple of months ago). Therefore, at the time of writing, the problem seemed premature to fully consider.
We note that there have been papers in fair clustering focused on a specific clustering objective such as the k-center (please see e.g. Kleindessner et al 2019, Chiplunkar et al 2020, Jones et al 2020, Harb and Lam 2020) so we do not think that this should be seen as a shortcoming especially given the novelty of the problem which for the first time considers multiple fairness constraints simultaneously in clustering.
Furthermore, we have in fact thought about generalizing the current approach to $k$-median and $k$-means as well. One issue is that not all centers would be active (i.e., assigned a non-empty cluster, as described in Section 4.1) while simultaneously satisfying GF. However, we believe it is a very interesting problem for future work.
### References:
M. Kleindessner, P. Awasthi, and J. Morgenstern, ‘‘Fair k-center clus- tering for data summarization,’’ in Proc. Int. Conf. Mach. Learn., 2019, pp. 3448–3457.
A. Chiplunkar, S. Kale, and S. N. Ramamoorthy, ‘‘How to solve fair k- center in massive data models,’’ in Proc. Int. Conf. Mach. Learn., 2020, pp. 1877–1886.
M. Jones, H. Nguyen, and T. Nguyen, ‘‘Fair k-centers via maximum matching,’’ in Proc. Int. Conf. Mach. Learn., 2020, pp. 4940–4949. | Summary: This paper studies how to combine several notions of fairness together in one clustering. Fairness is a popular notion in the context of clustering, however most of previous works had focused on a single notion of fairness at once. This paper studies two specific notions of fairness which are called group fair clustering (GF), and diverse center selection (DF). They give various algorithms that give a constant factor approximation to the k-center problem under the GF+DS constraints (i.e. the clustering has to satisfy both DS and GF constraints). They also study the relationship of these two notions of fairness to other notions that also appear in the literature. More precisely, they show the following.
1) From a GF fair solution which is a constant factor to the GF problem, one can obtain a constant factor approximation to the GF+DS objective (in polynomial time), assuming a slight violation of the GF constraints.
2) Similarly, from a DS fair solution which is a constant factor to the DS problem, one can obtain a constant factor approximation to the GF+DS objective (in polynomial time), assuming a slight violation of the GF constraints.
3) It is also shown that the DS and GF objectives are both incompatible with several other notions of fairness.
4) They validate their algorithms with experiments.
Strengths: I think this is a conceptually interesting paper. To the best of my knowledge, this is the first paper that combines different fairness criteria together.
Weaknesses: Althought the results are nice, the techniques do not seem very novel.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Line 2 in pseudocode page 6: the output is stated to satisfy the GF fairness constraint but in fact there is an additive violation. This happens in several other places where the reader first thinks that GF constraint are satisfied exactly without violation. I would encourage to rephrase a little bit.
Typos/minor comments:
Line 138: extra k in equation
Line 211: guarantees
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback. We will fix the typos in lines (138, 211). We will also be explicit about the violation in GF. Furthermore, we believe we presented an elegant solution to the problem and that our modular approach of applying GF or DS algorithms to finally solve both GF+DS can inspire similar future work that considers multiple constraints simultaneously in fair clustering.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response and clarifications. After reading the rebuttal and other reviews, my assessment has not changed. | Summary: The paper investigates the relationship and intersection between two constraints for fairness clustering: Group Fair, in which populations should be fairly represented in each cluster, and Diversity Selection, in which centers should be fairly selected. It is shown that a solution for one of the constraints can be post-processed into a solution for the other with a constant degradation in results, but also that both constraints are incompatible with other, distance-based fairness constraints introduced in the literature.
Strengths: The theoretical results are very interesting, in terms of showing how it is possible to achieve (with some degradation) both kinds of fairness in clustering but also, and perhaps even more interestingly, when showing how these constraints are incompatible with other fairness settings.
Weaknesses: Experimental results are somewhat weak, with a single dataset being analyzed in the main text and an additional one presented in the supplementary material. The running time of the post-processing method to adapt a DS-compliant solution into a GF+DS-compliant one is very high when compared to the running time of the DS algorithm, but this is only mentioned in the supplementary material as well.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Do you believe the same results would be replicated for other datasets?
- What would happen (in terms of guarantees, results, running times) if the constraints must be guaranteed for more than two groups?
- Were you able to verify the claim in the supplementary material that the overhead in the DS to GF+DS algorithm is due to the necessity of solving an LP?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: I believe the running time issues should be discussed in the main body of the paper. If there is space for it, it would be interesting to also expand on the consequences of these constraints not being compatible with distance-based ones.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback.
> Do you believe the same results would be replicated for other datasets?
We have run our algorithms on a reasonable number of datasets, in line with previous fair clustering papers (see e.g., Kleindessner et al. 2019, Esmaeili et al. 2021). Furthermore, we have added a third dataset (Diabetes); please see the message above and the attached pdf.
> Run-time issue
Before we discuss the run-time, it is important to note that the **DS** algorithm we use is from the recent publication of Nguyen et al., 2022. The original run-time of **DS** was in fact $O(n^2)$ (see Chen et al., 2016, “Matroid and knapsack center problems”) and was improved to $O(nk)$ recently by Nguyen et al., 2022 for the general **DS** setting. So a contributing factor is that the **DS** algorithm we use is very fast and hence does not represent a bottleneck. Had we used Chen et al., the conclusion would have been very different.
Second, we would argue that there is not really a run-time issue. Since the constraint involves **GF+DS** we are lower bounded in terms of run-time by the max run-time of **GF** and **DS**. The incremental run-time over the GF algorithm is completely negligible (Figure 14), therefore our final solution is not time-consuming. Further, the final run-times shown in Figure 16 are within what is expected for a solution satisfying GF in the literature. So the time for **DS → DS+GF** is not more than what would be considered reasonable.
Our new results on the Diabetes dataset show similar conclusions. We cannot think of a reason why the results would not hold over other datasets. LP based methods tend to be more time consuming than other combinatorial ``non-LP’’ approaches. This is not a general rule since a non-LP method could possibly have $O(n^5)$ run-time. But for example our combinatorial approach for **GF→ GF+DS** is indeed quite efficient and can be shown to run in linear time.
It is not trivial to satisfy the **GF** constraint for arbitrary bounds through a purely combinatorial approach, since the **GF** constraint is in some sense more complicated than **DS** and has to take into account the colors of all of the points in the dataset, not just the selected centers.
Please also see the last section in the response to reviewer Q947 about the run-time as it may help.
> What would happen (in terms of guarantees, results, running times) if the constraints must be guaranteed for more than two groups?
Our algorithms and theoretical guarantees hold for multiple colors. In Appendix F, the census dataset uses more than two groups. Only in Sections 5 and 6 do we show results on two colors, but these results are impossibility results, not guarantees of an algorithm. Could the reviewer please clarify this point, since our algorithms can already accommodate more than two groups?
### References:
M. Kleindessner, P. Awasthi, and J. Morgenstern, ‘‘Fair k-center clustering for data summarization,’’ in Proc. Int. Conf. Mach. Learn., 2019, pp. 3448–3457.
S. A. Esmaeili, B. Brubach, A. Srinivasan, and J. P. Dickerson, ‘‘Fair clustering under a bounded cost,’’ 2021, arXiv:2106.07239.
Chen, D. Z., Li, J., Liang, H., and Wang, H. Matroid and knapsack center problems. Algorithmica, 75:27–52, 2016.
Nguyen, H. L., Nguyen, T., and Jones, M. Fair range k- center. arXiv preprint arXiv:2207.11337, 2022. | Summary: This paper considers two common notions of fairness in clustering: (I) Group Fairness (GF) and, (II) Diversity in Data Selection (DS). The authors show how to boost an approximate algorithm that satisfies only GF/DF to an approximate algorithm that satisfies them both (with constant violations and constant times cost enlargement). Experiments also support the main claims of the paper.
GF and DF are two popular notions to capture group fairness, I am happy to see that actually they can mostly be satisfied simultaneously without paying much effort. I think the results are an interesting addition to the study of fair clustering.
Strengths: 1. The first results link different notions of group fairness constraint in clustering.
2. Algorithms are easy to follow.
Weaknesses: 1. The algorithms heavily depend on existing approximate algorithms for clustering with GF or DF. So the result should mostly be regarded as an enhancement of the existing algorithms.
2. The simultaneous guarantees may violate some group constraints. I am worried that in some cases this feature actually contradicts fairness.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What is your running time of algorithms in Theorem 4.1 and 4.2?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The algorithms may violate some groups' fairness constraints in the worst case. Will it be an issue in some cases?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for making these points.
> The algorithms heavily depend on existing approximate algorithms for clustering with GF or DF. So the result should mostly be regarded as an enhancement of the existing algorithms.
There are a collection of technical details which distinguish our work. First, the notion of active centers (centers forming non-empty clusters) mentioned in Section 4.1 is trivial to satisfy in **DS** and not needed for **GF**. But once **GF** and **DS** are considered simultaneously, it has to be satisfied, and satisfying it is non-trivial since the **GF** constraint requires that points of different colors be assigned to each center in the correct proportions. Furthermore, our **Divide** subroutine (Section 4.2) is carefully designed to run in linear time (this is not difficult to see, but we will add and emphasize this in the paper).
Furthermore, since our setting generalizes previously introduced problems, it is perhaps not surprising that it borrows elements from the **GF** and **DS** literature. We consider that to be an advantage since our final algorithm is built in a modular manner which is a much preferable algorithmic design approach. Further, the method of using existing algorithms as subroutines to handle a more complicated variant of the problem appears frequently in the literature. In fact, some **GF** and **DS** algorithms involve a step where an unconstrained (agnostic) clustering algorithm is used as a subroutine and where the output is then post-processed to satisfy **GF** or **DS** (see e.g, Kleindessner et al 2019, Bera et al 2019, Ahmadian 2019, Esmaeili et al 2020).
> The simultaneous guarantees may violate some group constraints. I am worried that in some cases this feature actually contradicts fairness.
> The algorithms may violate some groups' fairness constraints in the worst case. Will it be an issue in some cases?
We note that in many fairness papers the fairness constraint might be slightly violated. Note for example some common notions like envy-free-up-to-1-element (EF1) or near-feasible stable matching with couples (Nguyen and Vohra, 2018) where the violation is at most 4 couples, and stability is the fairness notion.
In our setting we only have a violation for GF, and it is always bounded by at most 3 in the worst case. There have been papers on the GF constraint with a similar guarantee. In general, the violation is inconsequential since a violation of only 3 points is negligible if the size of the cluster is large (consider a cluster of 1,000 points). In fact, if this small violation of 3 points caused an issue, it would imply that there is a cluster of a very small size. In most practical settings, the existence of a very small cluster likely implies that $k$ (the number of clusters) was chosen to be too high. Moreover, we are not the first to consider such a bounded violation in **GF**; see for example Bercea et al. 2018. Finally, we note that the recent work of Hotegni et al. in ICML 2023 considered violations in **DS** for $k$-median and $k$-means, whereas we satisfy **DS** without any violation.
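To make the bounded-violation guarantee concrete, the following is a small illustrative check of our own devising (the function and variable names are not from the paper): it measures the largest additive violation of the GF proportion bounds over all clusters and groups, the quantity that the guarantee bounds by a small constant.

```python
from collections import Counter, defaultdict

def max_gf_violation(assignment, colors, alpha, beta):
    """Largest additive violation of the GF proportion bounds.

    assignment: point -> cluster id; colors: point -> group;
    alpha[g] / beta[g]: upper / lower proportion bounds for group g.
    """
    clusters = defaultdict(list)
    for p, c in assignment.items():
        clusters[c].append(colors[p])
    worst = 0.0
    for members in clusters.values():
        size = len(members)
        counts = Counter(members)
        for g in alpha:
            cnt = counts.get(g, 0)
            worst = max(worst,
                        cnt - alpha[g] * size,   # points above the upper bound
                        beta[g] * size - cnt)    # points below the lower bound
    return worst

# One cluster with 3 'red' and 2 'blue' points against 50/50 bounds:
# red exceeds its upper bound by 3 - 2.5 = 0.5 points.
assignment = {i: 0 for i in range(5)}
colors = {0: 'red', 1: 'red', 2: 'red', 3: 'blue', 4: 'blue'}
bounds = {'red': 0.5, 'blue': 0.5}
print(max_gf_violation(assignment, colors, bounds, bounds))  # 0.5
```

As the example shows, odd-sized clusters force a fractional additive violation even for exactly balanced data, which is one reason some slack is unavoidable.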
> What is your running time of algorithms in Theorem 4.1 and 4.2?
For Theorem 4.1, note that it uses an LP-based algorithm. In general, when LP-based methods are used, papers tend not to state the exact run-time, or simply note that it is polynomial. One reason is that the optimal run-time for solving LPs remains a topic of ongoing research. Using a known result (Vaidya, 1989) would lead to a run-time of $O((nk)^{2.5})$. However, we use the Simplex algorithm, as done in most settings. While the worst-case run-time of the Simplex algorithm is exponential, empirically it is very fast. We note that the impressive empirical run-time of the Simplex algorithm has generated interesting work in beyond-worst-case analysis; please see the work of Spielman and Teng on smoothed analysis. One can get an idea of the run-time of the LP from its size (number of variables and constraints).
We will now give a more detailed description of Theorem 4.1's run-time. In **AssignmentGF**, the LP has a total of $nk$ variables and at most $O(nk)$ constraints. The size of the LP is comparable to or better than some LPs in clustering, which can be as large as $O(n^2)$. Note that we binary search over the values in the distance matrix, so we run the LP algorithm $O(\log n)$ times, which is again standard for the $k$-center problem. Further, the **MaxFlowGF** algorithm is a special construction of the max-flow problem with $|V|=O(n)$ and $|E|=O(nk)$. Using a standard algorithm such as Ford-Fulkerson or Edmonds-Karp would lead to a run-time of $O(n^2 k)$. We note that this is not slower than previous methods in fair and constrained clustering. The rest of the steps in Algorithm 2 are bounded by $O(n)$. Please see Appendix F for empirical run-times.
Theorem 4.2, using Algorithm 4, takes $O(n)$ time; a major ingredient in this fast run-time is our efficient **Divide** subroutine. We will add notes about this in the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for your detailed response! | Rebuttal 1:
Rebuttal: We thank the reviewers for their careful reading and constructive criticism.
## Experiments and Datasets:
We note that we have tested our algorithms on two datasets from the UCI repository: Adult (Section 7) and Census1990 (Appendix F). Further, please see the message below with the attached PDF for new results on the UCI Diabetes dataset; we find that the results for Diabetes are qualitatively similar to those on Adult and Census1990.
## Ethical and Limitation Concerns
The reviewers have raised important points about the limitations of our work and its societal impact. Following the suggestion of reviewer SpBP, we propose the following draft below to be added to the paper to discuss ethical issues and possible limitations:
``Similar to the vast literature in fair clustering, our work is mainly theoretical. We note that Group Fairness and Diversity in Center Selection are two of the most common fairness formalizations in clustering that operationalize legal doctrine such as the notion of disparate impact. There are other fairness formalizations proposed by the fair machine learning community. However, there is still possibly a gap between fairness notions proposed by the research community and industry practitioners in a particular application (see, e.g. Holstein et al. 2019). And in some cases, the stakeholders and practitioners may not fully discern between fair approaches. The application of our algorithms and their effects in a given application would (as most settings in fairness) be a complicated enterprise which should involve significant investigation into the model and the implications of the algorithms as well as consultation with stakeholders.
It is possible that a naive application of a fair algorithm that does not take sufficient considerations into account would result in possible harm. This is not unique to our setting or even fair clustering in general (see e.g., Liu et al, ICML 2018). As an example of possible harm caused by some of the fairness notions we considered, consider the application of **GF** clustering in a biomedical setting. It is possible that agnostic (color-blind) clustering would show correlations between some clusters and a group membership like race, i.e., some groups might be over- and/or under-represented in some clusters. Through further analysis this cluster might be discovered to be associated with a bad outcome, and therefore some groups might be more susceptible than others to this bad outcome. A decision maker would consider this an important discovery since a certain group is overrepresented in the bad-outcome cluster. However, the application of fair clustering would make all groups represented in each cluster in close to population-level proportions. Therefore, the application of **GF** clustering might not be suited for such an application. One can devise similar examples for **DS** as well. Furthermore, if the decision maker is certain that only **GF** or **DS** is needed, then **GF+DS** would not give an advantage and would possibly only degrade the clustering cost.''
## Additional Experiments on the Diabetes Dataset
We show additional experimental results on the Diabetes dataset from the UCI repository. This dataset was used in Backurs et al. (ICML 2019) and Chierichetti et al. (NeurIPS 2017); we take a sample of 20,000 points, whereas only 1,000 points were used in both papers. Similar to Backurs et al., we use ''age'' and ''time in hospital'' as the numeric entries, and the group membership is ''gender'', which has two values in this dataset. Further, we follow a setting similar to that of the experiments section (Sections 7 and F). Specifically, for **GF** the lower and upper proportion bounds for any color $h$ are set to $\beta_h = (1-\delta) r_h$ and $\alpha_h = (1+\delta) r_h$ with $\delta = 0.05$. Further, for the **DS** constraints we set $k^l_h = \lceil \theta r_h k \rceil$ where $\theta \in [0,1]$ and $k^u_h = k$ for every color $h$, with $\theta = 0.9$. The results are shown in the figures in the PDF.
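For concreteness, the bound computation described above can be sketched as follows; this is a hypothetical helper mirroring the stated formulas, not the authors' code.

```python
import math

def fairness_bounds(proportions, k, delta=0.05, theta=0.9):
    """Per-color GF proportion bounds and DS center-count bounds.

    proportions: dict mapping each color h to its population proportion r_h.
    Returns (gf, ds) where gf[h] = (beta_h, alpha_h) = ((1-delta)r_h, (1+delta)r_h)
    and ds[h] = (k^l_h, k^u_h) = (ceil(theta * r_h * k), k).
    """
    gf = {h: ((1 - delta) * r, (1 + delta) * r) for h, r in proportions.items()}
    ds = {h: (math.ceil(theta * r * k), k) for h, r in proportions.items()}
    return gf, ds
```

For a two-color dataset with equal proportions and $k=4$, this yields GF bounds of $(0.475, 0.525)$ per color and a DS lower bound of $\lceil 0.9 \cdot 0.5 \cdot 4 \rceil = 2$ centers per color.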
Figure 1 shows the **PoF**, **GF-Violation**, and **DS-Violation**, similar to previous experiments. We do not see a qualitative change.
Figure 2 shows the incremental time. Similar to what we found before, the incremental run-time for **GF** is small, while it is not for **DS**.
Finally, Figure 3 shows the full run-time of both **GF+DS** algorithms; as on previous datasets, we find that they are comparable.
### References:
Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé III, Miro Dudik, and Hanna Wallach. Improving fairness in machine learning systems: What do industry practitioners need? In Proceedings of the Conference on Human Factors in Computing Systems (CHI), 2019.
Lydia T. Liu, Sarah Dean, Esther Rolf, Max Simchowitz, and Moritz Hardt. Delayed impact of fair machine learning. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 3150–3158, Stockholmsmässan, Stockholm, Sweden, 10–15 Jul 2018. PMLR.
Arturs Backurs, Piotr Indyk, Krzysztof Onak, Baruch Schieber, Ali Vakilian, and Tal Wagner. Scalable fair clustering. In International Conference on Machine Learning, pages 405–413, 2019.
Flavio Chierichetti, Ravi Kumar, Silvio Lattanzi, and Sergei Vassilvitskii. Fair clustering through fairlets. In Advances in Neural Information Processing Systems, pages 5029–5037, 2017.
Pdf: /pdf/37f2b0195e9983268a48fe190c5c01cdb2164578.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
A Robust and Opponent-Aware League Training Method for StarCraft II | Accept (spotlight) | Summary: The paper aims to reduce the computational complexity of league training (the AlphaStar system) for Starcraft II by introducing heuristics. Exploiter agents are used in AlphaStar to target weaknesses in the primary policy (main agent) being trained. The paper proposes to condition the exploiter agents on strategies that are either (1) high win rate or (2) divergent from existing strategies. This provides a more guided set of agents that force the main agent to respond to strong opponents or diverse strategies. Another heuristic aims to improve performance by augmenting the base model with a (frozen) opponent prediction model tasked with predicting opponent state. This is used to design a reward that encourages the model to reduce the entropy in opponent model predictions, which creates "scouting" behavior that is beneficial in Starcraft II. Ablations show the components added improve model performance compared to the baseline AlphaStar and a human study against expert players demonstrates (modest) success over a large number of repeated matches.
Strengths: ## originality
Modest. Augmenting the league training algorithm with more specific strategy conditioning is a welcome addition to techniques for improving the opponents used in league training in a general fashion.
Opponent modeling is not a new task for game playing, but the specific implementation is new (to my knowledge). A scouting reward is an obvious fit to a game like Starcraft II where opponent knowledge is crucial. I look forward to seeing this more widely tested on other hidden information games where scouting may involve stronger trade-offs, like Honor of Kings (meaning: games where scouting comes at a cost compared to other tasks; in Starcraft II most scouting can be done relatively cheaply and in parallel to other activities).
## quality
Good. The human study is very strong evidence of ecological validity and success. Evaluating over 20 matches is a strong criterion and a welcome change compared to the previous best-of-3 or best-of-5 setups.
The results showing AlphaStar exploiters lose efficacy later in training (figure 6) are another example of solid evaluations of the core algorithms. The main method ablations all reflect a clear attention to evaluating the new methods.
## clarity
Modest. The methods and experiments are all explained in detail. Some of the details were difficult to follow and would benefit from alternative explanations (see below). Figures were mostly easy to understand, though some of the text assumes familiarity with details of Starcraft II (like Figure 4 and Table 2) that many readers may not possess.
## significance
Modest. General league training improvements are of value to RL algorithms applied to large strategic spaces. This will be of interest to a meaningful subset of the RL community investigating game playing. The improvements in saved computational time are modest, but clearly present.
Opponent modeling and the scouting reward are more specific to Starcraft II but will likely apply to certain other games. Overall the methods are promising as a default alternative to the baseline league training methods, with no clear downsides aside from some code complexity to implement the strategy-tracking statistics and selection methods.
Weaknesses: The results show modest improvements, but it is not clear how the long-run scaling looks. Does ROA-Star asymptote to the same performance as AlphaStar? Reach higher? Figure 2 suggests a meaningful Elo score improvement by 5 days of training, but it was not clear if AlphaStar would eventually catch up, and if so what that gap looks like.
The win rates against human experts converge to nearly 50%. Is there evidence that the AlphaStar model trained equivalently would do worse?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - [Q1] How does ROA-Star scale?
- How much can the core work be parallelized across more GPUs or agents (to reduce training time)?
- Are there any particular features of scaling that would differ to AlphaStar?
- Would ROA-Star benefit more or less from maintaining multiple concurrent exploiters of a given type?
- [Q2] Please provide pseudo-code for the EIE algorithm and ERE algorithm in the main text.
- The details are a bit murky regarding deviation and how it is used to guide sampling. These are the core new methods being introduced, so they should be very clear from the exposition.
- Particularly given no code will be released, this will substantially improve reproducibility.
- [Q3] Section 5.2
- Consider expanding this analysis more for readers unfamiliar with Starcraft II. What is Figure 4 showing?
- It may help to reorganize Figure 4. Show the two models (AlphaStar with and without opponent modeling) compared for each specific building route. So two colors (with vs without opponent modeling), with a separate box for each strategy transition.
- Or consider a summary metric that measures the rate of changes in build order in general.
- Or perhaps move this mostly to the appendix. It may be sufficient to provide a simple metric showing greater knowledge of opponent state without the specific strategic changes made.
- What are good summary statistics to show tactical changes due to scouting?
- Overall 5.2 was difficult to follow and it's not immediately obvious that the opponent modeling is changing strategy (correctly) more than the baseline.
- [Q4] Figure 6 is important!
- This is an important phenomenon to document and emphasize.
- I've not seen the degraded exploiter performance reported elsewhere. Creating a metric to track and benchmark this performance decay would help others evaluate other league training methods and assess potential improvements.
- [Q5] Figure 7
- Mark the 50% win rate line clearly. This will make it more obvious how well/poorly the agent does.
- Please report the final win rates over 20 matches against each human.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper does not discuss societal impact of the work. This is not that important given the focus on a game that is well discussed - the societal consequences of this work do not alter the conclusions made about the original AlphaStar work.
Limitations are not addressed with specific details. The paper would benefit from pointing out weaknesses in the method or cases that are not currently addressed by the algorithms. Some example questions: When might the scouting reward harm overall performance? What are the limits of a frozen opponent prediction model? Why is performance so close to 50% against human experts?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks very much for your review. We respond to all your questions below, and we are happy to provide further details if there is anything still unclear.
**Weaknesses**: It is not clear how the long-run scaling looks? Is there evidence that the AlphaStar model trained equivalently would do worse?
**Answer**:
We reimplemented AlphaStar and trained it for 10 days in total; the comparisons with ROA-Star are reported in Section 5.3. Due to limited resources, we stopped training AlphaStar after a thorough evaluation. First, there is an obvious gap in Elo scores between AlphaStar's MA and ours (Figure 5). We found that AlphaStar's MA is very fragile when facing certain models in the ROA-Star league, resulting in a worst-case win rate of 1% (Table 3). We also observed a decrease in the effectiveness of its exploiters (Figure 6), and its league training gradually degraded to self-play. We believe that even with an extended period of training, it is unlikely that AlphaStar would catch up with ROA-Star.
For a comparison between DeepMind's AlphaStar and ROA-Star, see Table 1. Our MA trained for 50 days, consuming about $4.42 * 10^{10}$ steps, while DeepMind's MA trained for 44 days and consumed around $1.9 * 10^{11}$ steps. In repeated games against top humans, our agent maintains a win rate of no less than 50%, which means the top humans did not find a winning strategy that can repeatedly defeat our agent. Beyond the results shown in Table 1, DeepMind's AlphaStar was easily defeated by Grandmaster players on Battle.net with the Cannon Rush strategy (0 wins and 2 losses, PvP), which is one of the uncommon strategies.
(DeepMind has released the replays of AlphaStar's matches on Battle.net against human players, here is the link https://www.nature.com/articles/s41586-019-1724-z#Sec32.)
**Q1**: How does ROA-Star scale
**A1.1**: More GPUs can support us to increase the batch size of agent training, thereby accelerating the training speed. Specifically, we have tried increasing the number of GPUs for training a single agent from 64 to 128. By utilizing data parallelism, we were able to double the batch size. Although there was an increase in communication time between GPUs during gradient backpropagation, the overall efficiency improved by nearly 30%.
**A1.2**: Actors in AlphaStar require both CPUs and TPUs to generate samples, while ROA-Star's actors only require CPUs. ROA-Star can accelerate sample generation speed by increasing the number of CPUs.
**A1.3**: We haven't conducted any relevant experiments, but we can offer some conjectures. Under the framework of ROA-Star, maintaining multiple concurrent exploiters of a given type would find more weaknesses of the main agent and the entire league, which is beneficial to improve the robustness of the main agent.
**Q2**: Please provide pseudo-code for the EIE algorithm and ERE algorithm in the main text.
**A2**: Thank you for the suggestion. We will add the relevant pseudocode. Here, we provide some details on calculating deviation. As mentioned in Section 2.2, the statistic $z$ includes the first 20 build orders (buildings or units) of a game. During training, the MA conditions on a certain $z$ sampled from human replays and generates the actual executed build order, which might deviate from the build order in $z$. To measure the deviation in execution, we compute the edit distance between the actual executed build order and the target build order once the game is over. For each $z$ in $D$, we maintain a moving average of its execution deviation. We will make this clearer in the revision.
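The deviation computation described above can be sketched as follows. This is illustrative only: the helper names and the smoothing factor are our assumptions, not the paper's implementation.

```python
def edit_distance(a, b):
    """Levenshtein distance between two build-order sequences,
    using a single-row dynamic program."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (x != y))  # substitution
    return dp[-1]

def update_deviation(avg, executed, target, alpha=0.1):
    """Update the moving average of execution deviation for one
    statistic z, given the executed and target build orders."""
    d = edit_distance(executed, target)
    return d if avg is None else (1 - alpha) * avg + alpha * d
```

For example, a target build order `["pylon", "gateway"]` executed as `["pylon", "core"]` has deviation 1 (one substituted building), and that value is folded into the running average for the sampled $z$.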
**Q3**: Section 5.2
**A3**: We have reorganized Figure 4 as you suggested, as shown in Figure 2 in the PDF we submitted.
Given StarCraft II is a highly complex game, it is hard to define a metric to summarize all reasonable strategy transitions. Therefore, we let the agent play against four specific opponents and show its distribution of the first two tech buildings, which is a coarse representation of a strategy.
For readers who are unfamiliar with Starcraft II, we have demonstrated that opponent modeling increases the agent's flexibility in responding to different opponents, as well as improving win rates in all situations, as shown in Table 2; For Starcraft II players, it is easy to check that the strategy transitions depicted in Figure 4 are straightforward and reasonable.
**Q4**: Figure 6
**A4**: In the original AlphaStar paper, Extended Data Fig. 8 plots a payoff matrix of league training, which shows a similar phenomenon: the effectiveness of exploiters degrades as training progresses. Although AlphaStar did not further investigate this phenomenon, we believe that improving the effectiveness of the exploiter throughout the entire training process would be beneficial for the league training.
Except for the payoff matrix, a moving average of the win rate for each exploiter defeating the MA is a meaningful metric that indicates the effectiveness of the exploiter, and this metric can be easily obtained during the training process. We will make it clearer in the revision.
**Q5**: Figure 7
**A5**: Thank you for your suggestion. We have refined Figure 7 and show the revision in Figure 3 in the PDF we submitted. We will add a table to report the final win rates in the final version.
**Limitation**: Limitations are not addressed with specific details.
**Answer**:We are willing to discuss the limitations of the frozen opponent prediction model, and we will incorporate this discussion into the revision.
The frozen opponent prediction model preserves the prior knowledge of game strategies that human players consider effective. The diversity of strategies in human data may affect the effectiveness of opponent modeling. In other words, if the agents discover new and reasonable strategies during league training, the effectiveness of opponent modeling might decrease. This is something we need to verify in future work.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed replies and revised figures.
- [A1] Great to see the reasonable scaling benefits! I did not realize actors require TPUs in AlphaStar. Knowing ROA-Star can benefit from CPU-only scaling is another benefit to highlight in the paper.
- [A3] This is much clearer than before.
- [A5] Much better! As a minor point I would suggest using a dashed or solid black line across the 50% y-value to make the value clearer. Either way this is much easier to read.
---
Reply to Comment 1.1.1:
Comment: Thank you. We will refine Figure 7 according to your suggestion. | Summary: This paper describes an AlphaStar-like approach for training a top-human-level StarCraft 2 agent, with three core additions/changes to the approach that was used to train AlphaStar:
1. Training exploiters in the league conditioned on certain goals: exploitative exploiters which are conditioned on the $z$ statistics (i.e., unit composition strategies) that perform best for the main agent, and explorative exploiters which are conditioned on under-explored $z$ statistics.
2. An auxiliary loss (used in supervised learning on expert replays phase) to predict what strategy (how many of which unit types) the opponent is using.
3. Reward shaping (used in reinforcement learning phase) that rewards the agent when it improves its own ability (through scouting) to predict what units the opponent has with the part of the model that was trained on the axuiliary loss mentioned above.
The contributions are primarily intended to make the overall agent more robust / less susceptible to unexpected strategies.
Strengths: - Strong empirical results (including against top human experts)
- The writing is clear (barring just a few minor parts) and correct
Weaknesses: ### Main comments
- The paper seems to barely fit in a conference submission. Results for several experiments (more than just hyperparameter tuning / extensive ablations) are only very briefly mentioned in the main paper (with results not even summarised, just the existence of the experiment being mentioned), with results+discussion only present in supplementary material.
- The paper mentions AlphaStar's computational costs being prohibitive, and it is probably true that this paper used a bit less... but, frankly, the hardware used for this paper is **still** a very large amount.
### Detailed comments
- Table 1 does not mention for how long each agent was trained. This was 50 days for ROA-Star, but (I think?) less for AlphaStar. I'm not sure by how much. I imagine that, even if AlphaStar took much less time, the 256 TPUs would still be more expensive than the 64 GPUs... but anyway, this is important information to include here.
- Line 58: Last but no least --> Last but not least
- Lines 114-115: this is a bit vague / overly general. "does not respond [to] the opponent['s] real-time strategy" --> what does it mean to "respond" to a strategy? Surely the agent plays, it must be responding somehow. What is the difference between "real-time strategy" and any other form of "strategy"?
- Line 123: More specially --> More specifically
- Lines 139, 140, 275: z (without mathmode) --> $z$
- Line 144: In instead --> Instead
- Line 174: "there has been evidence showing that" --> this needs a reference
- Line 288: "Similarly" --> similar to what?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can you provide any clarification in particular on the point I described above about Table 1?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No comments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks very much for your review. We respond to your questions below.
**Q1**: Can you provide any clarification in particular on the point I described above about Table 1?
**A1**: Yes. The agents of AlphaStar were trained for 44 days. With the scale of our computational resources, an agent can process about 11,000 environment steps per second. After 50 days of training, each of our agents (MA, ME, and 2 LEs) consumed around $4.42 * 10^{10}$ steps. By comparison, each agent of AlphaStar can process about 50,000 environment steps per second and consumed around $1.9 * 10^{11}$ steps during the entire training process. We will make this clearer in the revision.
**Q2**: Writing issues.
**A2**: Thanks very much for your suggestion. We will edit the paper according to your advice.
**Q3**: Lines 114-115: this is a bit vague / overly general. "does not respond [to] the opponent['s] real-time strategy" --> what does it mean to "respond" to a strategy? Surely the agent plays, it must be responding somehow. What is the difference between "real-time strategy" and any other form of "strategy"?
**A3**: Responding to an opponent's strategy means (in our usage) adjusting one's own strategy (by constructing tech buildings, training military units, and researching new technologies) in order to gain a strategic advantage. For example, if it is discovered that the opponent has produced stealth units (like Dark Templars), one should produce anti-stealth units (like Observers). The opponent's real-time strategy refers to the opponent's current strategy, rather than their strategy earlier in the game. We will make this clearer in the revision.
**Q4**: Line 174: "there has been evidence showing that" --> this needs a reference
**A4**: We observed this phenomenon in the human evaluation replays of AlphaStar released by DeepMind ( https://www.nature.com/articles/s41586-019-1724-z#Sec32). It is relatively easy to observe that AlphaStar rarely conducts effective scouting and lacks knowledge of the opponent's real-time strategy, making it fragile to uncommon strategies like the Cannon Rush strategy in PvP matches (as shown in our supplementary video). We will add this reference in the revision.
**Q5**: Line 288: "Similarly" --> similar to what?
**A5**: Similar to the evaluation method used in section 5.1. We will make this clearer in the revision.
---
Rebuttal Comment 1.1:
Comment: Thanks. This message is just to confirm that I have read your rebuttal. I have no further specific questions at this time.
---
Reply to Comment 1.1.1:
Comment: Many thanks for your thorough review and suggestions. | Summary: This paper introduces ROA-Star, an improvement to the AlphaStar training framework for StarCraft II. ROA-Star addresses two identified issues in AlphaStar: diminishing efficiency of exploiters as the training progresses and the Main Agent's inability to adapt to opponent strategies in real time. As a solution to the first problem, it incorporates goal-conditioned reinforcement learning with two types of exploiters - Exploitative Exploiters (EIE) and Explorative Exploiters (ERE). EIE fine-tunes micro-level operations targeting high win-rate strategies, while ERE uncovers macro-level strategy weaknesses. ROA-Star also utilizes opponent modeling through a probabilistic encoder-decoder, and includes a "scouting" reward to encourage agents to actively collect information on the opponent. Experiments demonstrate that ROA-Star enhances robustness, adaptability, and performance compared to AlphaStar.
Strengths: The paper proposes a new approach to league-based training in complex imperfect-information zero-sum games. The authors elect to apply their approach to StarCraft II, which is arguably one of the most complex environments in this domain. They offer clear comparisons against the state-of-the-art AlphaStar solution, demonstrating that their method outperforms the baseline across various metrics, including learning speed, computational resources needed, and robustness of the final agent. Compelling evidence of their approach's superiority is provided through comparisons of their agent against AlphaStar and human players in a more challenging setup than what AlphaStar employed, involving multiple games allowing humans to exploit any weaknesses in the agent. This is a particularly strong paper.
Weaknesses: This paper is a really strong contribution, but it could be further strengthened by refining the writing. Specific examples include:
line 144: The word 'In' is superfluous.
line 171: 'seen' should be replaced with 'observed'.
line 260: 'Besides' should be replaced with 'Additionally'.
line 295: 'ELo' should be corrected to 'Elo'.
lines 113-114-115 need to be rewritten for clarity and coherence.
Addressing these linguistic issues would enhance the paper's readability and presentation.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Do the authors have any intentions to release the game replays involving matches against human professionals? Additionally, are there plans to make the developed agent accessible to the public? These aspects could offer valuable insights and opportunities for further research and community engagement.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors acknowledge that their approach relies on certain StarCraft-specific assumptions. It would be really interesting to see how this method performs in different domains, including simpler environments such as Stratego. This extension could provide a more comprehensive understanding of the approach's applicability and limitations beyond StarCraft.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks very much for your review. We respond to your questions below.
**Q1** (weaknesses): This paper could be further strengthened by refining the writing.
**A1**: Thanks very much for your suggestion. We will edit the paper according to your advice.
**Q2** (Questions): Do the authors have any intentions to release the game replays involving matches against human professionals?
**A2**: Thank you for your interest. We have included all the game replays against human professionals in the supplementary materials, under the folder named 'replay'. We will make this clearer in the paper.
**Q3** (Questions): Are there plans to make the developed agent accessible to the public?
**A3**: We are willing to make it public, but unfortunately, it is currently technically infeasible. Blizzard released StarCraft API in https://github.com/Blizzard/s2client-proto with the highest supported game version of 4.10.0, which is the version we trained our agent on. But the current version of its client has been updated to 5.0.11. It requires configuring the player's local StarCraft II client to be able to play with our agent on an earlier version. For the same reason, we cannot release our agent on Blizzard's online matching system Battle.net like AlphaStar did, as the players play on the latest game version.
**Q4** (limitation): It would be really interesting to see how this method performs in different domains, including simpler environments such as Stratego.
**A4**: One direction of our future work is to apply ROA-Star to other large-scale two-player imperfect-information games. Compared to small games, ROA-Star is more likely to offer superior performance in large games with high complexity. This is because ROA-Star can be viewed as a trade-off between efficiency and theoretical guarantees. For smaller games (compared to StarCraft II), there have been extensive methods with strong theoretical convergence guarantee to Nash Equilibrium, e.g., PSRO, DeepNash, CFR, etc. | Summary: The authors present a modification to the AlphaStar training algorithm that increases robustness and allows for faster responses to opponent behaviour by explicitly encoding the opponent's strategy in the model's representation along with adding scouting as an ancillary objective.
Strengths: The method exceeds the SOTA in starcraft and should generalize to other tasks
They present ablation that demonstrate the individual improvements of each contribution
Weaknesses: There are limited implementation details provided and the authors do not provide a code release.
The authors describe the model as "opponent aware", but this is a hypothesis: they do not investigate whether the added features actually contribute to "awareness", only whether they improve the win rate; the models could just be learning a more robust set of strategies.
Adding scouting as an additional reward objective appears not to have any grounding beyond the observation that the previous models did not have a strong enough bias towards scouting
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Can you provide summary statistics or a full release of the self-play training data? 50 days does not clearly explain how many steps/games were needed to train each model
Did the authors attempt to examine the opponent model latent space?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The use of human subjects without IRB is concerning
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks very much for your review. We respond to your questions in the following.
**Q1**: Did the authors attempt to examine the opponent model latent space?
**A1**: Thank you for your suggestion. To show the agent's 'awareness' of the opponent's strategy, we added an experiment visualizing the latent space of the opponent prediction model when the agent plays against four different opponents, as shown in Figure 1.b in the PDF we submitted. From left to right, we show the distribution of $h_t$ (the latent opponent strategy embedding) at different game stages: at the 30th second of the game, all the opponents only produce some Probes (the workers) and cannot be distinguished. In the 3rd minute, the opponents build their first tech building and choose different tech routes, but they still have similar early military units (like Stalkers), so there are overlaps in the hidden space. In the 6th minute, the opponents have produced different units and developed different technologies, resulting in distinct distributions in the hidden space. As a comparison, Figure 1.a visualizes the latent space of the policy network in AlphaStar: when facing different opponents, there is no obvious separation in the distribution of hidden states.
**Q2**: Adding scouting as an additional reward objective appears not to have any grounding beyond the observation that the previous models did not have a strong enough bias towards scouting.
**A2**: Scouting is essential in RTS games because of the existence of “fog of war”. A consensus among StarCraft II human players is that scouting is indispensable to detect the opponent's real-time strategy and enable proper strategic responses, which has a large impact on the probability of winning a game. Also, effective scouting is not easy: it is important to decide when and where to scout, which is difficult to write hard rules for. We design an effective and general-purpose scouting reward based on the decrease in cross-entropy between the true opponent strategy and the opponent model's predictions. We believe our scouting reward scheme can be easily applied to similar partially observable games. We will make this clearer in the revision.
**Q3**: Can you provide summary statistics or a full release of the self-play training data? 50 days does not clearly explain how many steps/games were needed to train each model.
**A3**: Yes, here we present the summary of the training process of our agents, where the basic settings of each agent can be found in appendix A.3. With the scale of our computational resources, an agent can process about 11,000 environment steps per second. Our main agent was continuously trained for 50 days, which consumed about $4.42 * 10^{10}$ steps. In comparison, AlphaStar's main agent was continuously trained for 44 days and consumed around $1.9 * 10^{11}$ steps (around 50,000 environment steps per second). During training, our main agent added a frozen copy to the league every $2 * 10^8$ steps, resulting in a total of 221 models. Our ME reset its parameters after $4 * 10^8$ steps at most, which generated 113 models in total. Our LE reset its parameters after $2 * 10^8$ steps at most, and the two concurrent-training LEs generated 216 and 218 models respectively. These details will be added to the revision.
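As a purely arithmetic aside (computed only from the figures quoted above, not from any additional data), the reported totals are self-consistent: 11,000 steps/second over 50 days gives an upper bound of about $4.75 \times 10^{10}$ steps, slightly above the quoted $4.42 \times 10^{10}$, and dividing the quoted total by the $2 \times 10^8$-step snapshot interval yields exactly the 221 league models reported:

```python
# Consistency check of the training statistics quoted above
steps_per_second = 11_000
seconds_per_day = 86_400
upper_bound_steps = steps_per_second * seconds_per_day * 50  # ceiling for 50 days
quoted_steps = 4.42e10                                       # stated total for the main agent
league_copies = round(quoted_steps / 2e8)                    # one frozen copy every 2e8 steps
print(upper_bound_steps, league_copies)                      # 47520000000 221
```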
---
Rebuttal Comment 1.1:
Comment: Thank you for putting in the time to run the visualization; I think it helps strengthen your explanation of the model's performance.
I agree that scouting is important in RTS games, and that improving scouting is a noteworthy result in and of itself. I hope that your revisions make it clear that this was the motivation of your changes to the original algorithm.
The additional details will be helpful in further extensions of your work, thank you for putting in the time to collect them.
The responses have not changed my position, and I have no followup questions.
---
Reply to Comment 1.1.1:
Comment: It is our pleasure, and thanks very much for your constructive and thorough review. We will refine the paper according to your suggestions. | Rebuttal 1:
Rebuttal: We have submitted a PDF that includes three figures. In response to reviewer sqN1's question, we demonstrate the agent's 'awareness' of the opponent's strategy in Figure 1 (in the new pdf). Additionally, based on the suggestions from reviewer eEEJ, we have refined the original Figure 4 and Figure 7, and plotted Figure 2 (in the new pdf) and Figure 3 (in the new pdf).
Pdf: /pdf/d94a53a84d7d250a3e19246e571bff8218b7f17d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
On the Power of SVD in the Stochastic Block Model | Accept (poster) | Summary: This paper studies the power of the vanilla-SVD algorithm, i.e., an algorithm without any pre-processing or post-trimming steps, in the symmetric stochastic block model, and proves that it recovers all clusters in the balanced case, answering an open question in [Vu18].
Strengths: The main contribution of this work lies in its theoretical demonstration that a truly vanilla SVD algorithm recovers all clusters of the symmetric stochastic block model in the balanced case with high probability. Noticeable distinctions from existing works include a truly vanilla SVD algorithm without any additional pre-processing steps and the ability to handle cluster numbers $k = \omega(1)$. Also, from a technical standpoint, the authors introduce a novel "polynomial approximation + entrywise analysis" approach, which simplifies the study of eigenspace perturbation by reducing it to the analysis of a simple polynomial under perturbation, thus making the analysis more robust and requiring less structure.
Weaknesses: The presentation of this work could be improved. First, it might be better to explicitly mention in the Abstract and Section 1 that the analysis is restricted to the balanced case and does not directly extend to other cases. Second, [MZ22b] also claims to answer the open question in [Vu18] in the balanced case. The authors point out that the centered-SVD algorithm proposed in [MZ22b] is not truly vanilla, containing a pre-processing centering step which depends on knowledge of $q$. But it would still be beneficial to compare the conditions and bounds of [MZ22b] for a more complete understanding. Third, adding more background information and necessary theoretical preliminaries would make this work more accessible to a broader audience. Lastly, the analysis in Section 4 may be better positioned prior to Section 3.3, which already wraps up the entire analysis, or included within the supplementary material.
Others:
- In Lemma 3.1, it should be $1 - n^{-3}$ rather than $1- n^3$.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: This work argues that its analysis can handle the case of $k = \omega(1)$ while [MZ22b] considers a stronger case of $k = \omega(\log n)$. Can the analysis in this work also handle the case of $k = \omega(\log n)$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: There seems to be no discussion of limitations or potential societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank this reviewer for the suggestions on improving our presentation. We will improve our writing accordingly.
**Regarding the question about $k=\omega(\log n)$:**
Yes, our proof works perfectly in this case. As long as the parameters $n,p,q,k$ satisfy the requirement in Theorem 1, our algorithm is able to recover the clusters. For example, if $p=0.51, q=0.49, k = n^{0.1}$, our algorithm can recover all clusters.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I maintain my acceptance recommendation.
---
Reply to Comment 1.1.1:
Comment: Thank you for reviewing our response and maintaining your acceptance recommendation. We appreciate your evaluation and support :) | Summary: In order to understand the behavior of spectral steps in clustering problems, this paper studies the power of vanilla-SVD algorithm in the SBM. This work shows that vanilla-SVD algorithm recovers all clusters correctly in the symmetric setting.
Strengths: 1. To theoretically understand the power of practically successful vanilla spectral algorithms, this paper studies SBM as a preliminary demonstration. Then, some proofs are presented for the proposed Theorem 1.1.
2. This work shows that the vanilla algorithm is indeed a clustering algorithm in SSBM for a wide range of parameters.
3. The authors give another analysis on matrix perturbation with random noise.
4. Detailed comparisons with existing analysis for vanilla spectral algorithms in SBM are presented in this paper.
Weaknesses: 1. In real applications, it is common in practice that spectral embeddings obtained by spectral clustering algorithm need a post-processing, e.g., k-means. It is reasonable that spectral embeddings themselves are clustering-friendly. Thus, the contributions of analyzing vanilla spectral methods could be highlighted.
2. From Section 1.3, it can be observed that there is limited work on the vanilla spectral algorithm. Could the authors provide an explanation for the reasons behind analyzing the parameters in the SSBM setting in this paper?
3. From the comparisons, it is apparent that there are some issues with the related theoretical analyses, such as [AFWZ20], [EBW18], [PPV+19], etc. Since these weaknesses are evident, could the authors consider providing quantitative experimental analysis in addition to the theoretical analysis?
4. Between lines 48 and 49, '...can be view as a fixed matrix...' should be corrected to '...viewed...'.
5. The coherence of the reviewed content in the Introduction section is insufficient. The authors seem to focus more on the mathematical advancements than on the theoretical and practical applications of spectral methods in the field of machine learning, especially related work on both theoretical understanding and practical applications.
6. The argument is verified using a specific setting in a typical model, SSBM, to demonstrate that the vanilla spectral algorithm is powerful and practically successful. It may not be comprehensive enough.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. It is suggested that experiments be conducted from the perspective of practical applications.
2. Considering this paper as purely theoretical work, then based on Weaknesses #3, it is recommended to provide quantitative experimental analysis to supplement and demonstrate the limitations of the theoretical analysis of the comparison methods.
3. The arguments of this paper should be verified in more settings, not only the symmetric version of SBM.
4. It is recommended to use reference citation format that complies with the guidelines specified by this conference. Additionally, it would be beneficial to gain a better understanding of the contributions of spectral methods in the field of machine learning or data mining.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for pointing out the typo and your suggestion on the citation format; we shall certainly improve our paper accordingly.
**Regarding experiments (Weakness #3 and Question #1, #2):**
We study the same algorithm (SVD) as [AFWZ20], [EBW18], and [PPV+19], and we give a better analysis. Since it is the same algorithm, we are not sure how to run a quantitative experimental analysis for comparison; please let us know if you have any suggestions. As for practical applications, the fact that SVD is widely used in machine learning and data mining has already validated the effectiveness of the algorithm, so we focus only on the theoretical analysis.
**Regarding other issues:**
- **Weakness #1.** Even if post-processing, say K-means, is used in practice, there are many practical observations that, in some scenarios, a vanilla spectral algorithm followed by K-means is much better than K-means alone. (There are many references; we can provide some of them if you want.) The main reason is that vanilla spectral algorithms can remove noise in these applications. This phenomenon is exactly what we have analyzed in the SSBM model. We think this is also a good contribution.
- **Weakness #2 #6 & Question #3.** SBM is a well-known benchmark for clustering tasks. SSBM is indeed a further simplification, but it is a good starting point. Analyzing parameter settings in SSBM helps to understand theoretically under what conditions SVD works well. As you suggested in Weakness #6 and Question #3, we expect future works to analyze more general settings. But even SSBM is highly non-trivial: this problem has been open for several years, and some great mathematicians, such as the authors of [AFWZ20], [EBW18], [Vu18], and [PPV+19], didn't solve it.
- **Weakness #5.** We will definitely improve our introduction part to highlight the relevance to machine learning and append more related works.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' reply. I have increased the score.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your willingness to reconsider based on our rebuttal. Your feedback has been invaluable to us. | Summary: This paper investigates the effectiveness of the vanilla-SVD algorithm in the stochastic block model (SBM) and demonstrates that it can accurately recover all clusters in the symmetric setting. The authors address an open question raised by Van Vu in the symmetric setting.
Strengths: 1. The vanilla algorithms employed in this study are applicable to a wide range of parameters, surpassing previous limitations that only allowed analysis on a constant number of clusters.
2. They also provide a novel analysis on matrix perturbation with random noise.
Weaknesses: 1. The paper lacks experimental evaluation to demonstrate the effectiveness of the proposed scheme.
2. The time and space complexity analysis are not compared with previous schemes, which could provide insights into the efficiency of the proposed approach.
3. Many proofs are omitted from the main paper (I understand that this is due to the space limit). Maybe a theory conference like COLT would be a better venue for this paper?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. What advantages does the SVD approach in this paper offer compared to the terminal version of the Johnson-Lindenstrauss Lemma mentioned in ``Optimal terminal dimensionality reduction in Euclidean space''?
2. Is the proposed scheme efficient for large $n$ and $k$? If $n$ and $k$ are large, can the algorithm handle them efficiently?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Overall, this paper provides some valuable contributions to the study of the vanilla-SVD algorithm in the SBM. However, addressing the weaknesses mentioned above, such as conducting experiments, comparing complexities, and improving readability, would enhance the quality and impact of the research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank this reviewer for useful comments and questions.
**Regarding weakness:**
We did not propose a new scheme. Instead, we provided a theoretical analysis of a very popular algorithm, SVD (singular value decomposition). Given that this algorithm has already been widely used in practice, we did not run further experiments to demonstrate its effectiveness.
The time and space complexity of our algorithm is simply the same as the time and space complexity of SVD.
All proofs are included in the supplementary material. Even though the main paper has a page limit, we will prepare an improved full version on arXiv once this paper has been accepted. COLT is a good conference, but we also love NeurIPS :-)
**Regarding questions:**
**Comparison to JL projections.** JL is also a good dimensionality reduction algorithm. However, it only preserves $\ell_2$ norm distance after projection. By contrast, SVD can do something more, namely, SVD also filters the noise. In fact, this is the main reason why SVD can recover clusters in SBM, but JL projection can not.
**Efficiency when $k$ and $n$ are large.** The main part of our algorithm is to find the top $k$ eigenvectors of a $n\times n$ matrix. This can be implemented within $O(k n^2)$ time and $O(n^2)$ space.
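For concreteness, here is a minimal numpy sketch (the parameters and the threshold constant are illustrative assumptions, not values from the paper) of this vanilla pipeline on a synthetic SSBM instance: sample the adjacency matrix, take its top-$k$ eigenvectors with no centering or trimming, and group the embedded rows by a distance threshold.

```python
import numpy as np

# Illustrative SSBM parameters in a well-separated regime (not from the paper)
rng = np.random.default_rng(0)
n, k, p, q = 300, 3, 0.9, 0.05
m = n // k
labels = np.repeat(np.arange(k), m)

# Sample a symmetric adjacency matrix from the SSBM
probs = np.where(labels[:, None] == labels[None, :], p, q)
A = np.triu((rng.random((n, n)) < probs).astype(float), 1)
A = A + A.T

# Vanilla SVD step: top-k eigenvectors of A, with no pre-processing
_, eigvecs = np.linalg.eigh(A)   # eigenvalues returned in ascending order
U = eigvecs[:, -k:]              # n x k spectral embedding (rows = node embeddings)

# Rows of the ideal embedding sit at k points about sqrt(2k/n) apart,
# so group rows greedily with a radius of half that distance
threshold = 0.5 * np.sqrt(2.0 * k / n)
assigned = np.full(n, -1)
num_clusters = 0
for i in range(n):
    if assigned[i] == -1:
        close = np.linalg.norm(U - U[i], axis=1) < threshold
        assigned[close & (assigned == -1)] = num_clusters
        num_clusters += 1

# Exact recovery up to relabeling: every recovered cluster is pure
pure = all(len(set(labels[assigned == c])) == 1 for c in range(num_clusters))
print(num_clusters, pure)
```

With these strongly separated parameters the greedy grouping recovers the planted partition; closer to the recovery threshold, a careful choice of the clustering radius, as with $\Delta$ in the paper's Algorithm 1, matters.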
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. The rebuttal addresses some of my concerns. I am willing to increase my grade slightly.
---
Reply to Comment 1.1.1:
Comment: Thank you for considering our rebuttal and re-evaluating our submission. We genuinely appreciate your feedback! | Summary: The manuscript mentions that the paper contributes by providing a theoretical understanding of the power of vanilla spectral algorithms in clustering problems, specifically in the stochastic block model (SBM). It also presents a novel analysis of matrix perturbation with random noise. These contributions suggest that the document offers new insights into the application of SVD in clustering problems, particularly in the context of SBM. It focuses on the stochastic block model (SBM), a benchmark for clustering, and treats it as a form of vector clustering. The manuscript proposes and analyzes a vanilla-SVD algorithm for graph clustering and demonstrates its effectiveness in SBM.
Strengths: The main topic of the manuscript is the analysis of the power of Singular Value Decomposition (SVD) in clustering problems. The manuscript discusses the use of dimensionality reduction techniques, specifically PCA and SVD, to improve clustering results in high-dimensional datasets. It explains that classical clustering algorithms like K-means may perform poorly in such datasets due to the curse of dimensionality. Spectral methods like PCA and SVD have been observed to significantly enhance clustering results. The manuscript explores the reasons behind this improvement, including filtering noise from high-dimensional data. It focuses on the stochastic block model (SBM), a benchmark for clustering, and treats it as a form of vector clustering. The manuscript proposes and analyzes a vanilla-SVD algorithm for graph clustering and demonstrates its effectiveness in SBM. The authors present their results, including a clustering algorithm and an analysis of matrix perturbation with random noise. They compare their approach with existing analysis methods and highlight the advantages of their approach. The document concludes with a proof outline and technical contributions, outlining the steps taken to analyze the power of vanilla spectral algorithms for clustering.
Weaknesses: The large number of formulas makes the paper hard to read. Some experiments should be designed to evaluate the significance of the model.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1] The paper focuses on the power of the vanilla-SVD algorithm in the stochastic block model (SBM). It would be interesting to know if the algorithm's effectiveness is limited to the SBM or if it can be applied to other clustering problems as well. Is there any generalization of its performance to different datasets?
2] The authors mention that the lack of theoretical analysis for vanilla spectral algorithms is due to technical obstacles. Could you elaborate on these obstacles and explain why the theoretical analysis is challenging? What are the main difficulties in analyzing vanilla spectral algorithms?
3] The authors state that the lack of theoretical analysis is partly due to the simplicity of vanilla spectral algorithms, which are not specifically designed for theoretical models like SBM. How do these algorithms compare to more complicated algorithms designed for SBM in terms of performance and practicality? Are there any trade-offs between simplicity and performance?
4] The authors mention the potential application of vanilla spectral algorithms in real-world data, but there is no analysis or discussion on this topic. Can you provide any insights into the behavior of these algorithms on real data? How do they perform in different applications? Are there any challenges or limitations when applying vanilla spectral algorithms in practice?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This work should design some experiments to fully illustrate the application value of the model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We really appreciate this reviewer’s feedback and questions. We answer them below.
**Regarding the weakness:**
Since this is a theoretical paper, some formulas are necessary. However, we will try our best to make this paper accessible to most readers. In fact, results and proofs in random matrix theory are often mathematically intensive. Compared to previous papers, our analysis is quite concise. For example, the paper we cited [AFWZ20] (https://arxiv.org/pdf/1709.09565.pdf) has 58 pages and way more formulas.
The SBM model is a well-known benchmark for graph clustering problems (according to wikipedia, https://en.wikipedia.org/wiki/Stochastic_block_model). So we didn’t run experiments to validate this model.
**Regarding the questions:**
1] Regarding the algorithm's effectiveness in other clustering problems.
This paper focuses on vanilla spectral algorithms — specifically SVD (singular value decomposition) algorithm, which is already a very popular algorithm in practice. We did not propose a new algorithm. Instead, we provided new theoretical insight into why this widely-used algorithm is successful.
2] On the obstacles of theoretical analysis for vanilla spectral algorithms.
We have discussed the major reasons in our paper. One main reason is that many previous analyses relied on a key component called the Davis-Kahan theorem, which is suboptimal for analyzing vanilla spectral algorithms in SBM (please see Page 4 of our paper). In fact, such obstacles have also been discussed in previous papers. The reason that we can analyze SVD is that we fully avoid the Davis-Kahan theorem. We believe this is a highly non-trivial step, since a majority of papers in random matrix theory use Davis-Kahan (or its variants) to analyze matrix perturbations. Our new approach has the potential to yield more applications in different settings.
3] Compared to more complicated algorithms designed for SBM in terms of performance and practicality.
These complicated algorithms perform well in SBM. However, they are not popular in practice because their performance on real datasets is no better (and usually worse) than that of vanilla ones. In contrast, we analyzed vanilla spectral algorithms such as PCA and SVD, which are successful in practice.
4] Application of vanilla spectral algorithms in real-world data.
Vanilla algorithms such as PCA and SVD (the algorithm we analyzed in this paper) are widely used in practice. There are many papers/blogs which have discussed their performance in all kinds of real scenarios. Given this reason, we didn’t repeat these discussions. Instead, we focus on the performance of SVD in theoretical models.
Overall, thanks again for these comments and questions :-) | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper provides rigorous, theory-based evidence that vanilla spectral algorithms (i.e., methods that run SVD on the adjacency matrix without any further processing) succeed in finding many communities in symmetric stochastic block models. In contrast to Davis-Kahan approaches, the authors adopt an analysis similar to [MZ22b], which is inspired by power iteration. The key technical novelty of the authors' method is a new way to study eigenspace perturbation by using a polynomial approximation of the operator that projects a vector onto the space of the first k eigenvectors of the adjacency matrix.
Strengths: While there are many community recovery algorithms for SBMs at this point, this paper provides an important analysis of vanilla algorithms for community recovery, showing that pre or post processing steps are unnecessary, thereby validating practical approaches. This is the key strength of the paper, in my view.
A secondary strength, which is of significance to researchers working on random matrix theory, is the use of the polynomial approximation method for the projection operator. This interestingly allows one to circumvent usage of the Davis-Kahan theorem (which is the standard way to handle eigenspace perturbation), and I expect this technique will be useful in various other settings.
The paper is also very well written, with clear comparisons to related work, and key contributions highlighted.
Weaknesses: The main weakness I found was that there is little to no discussion of the condition under which Algorithm 1 recovers communities (Theorem 1.1). At face value, it seems like a similar condition as [Vu18], with a few changes, like $\sigma \sqrt{k}$ replaced by $\sqrt{kp} \log^6 n$. A few questions are:
- Is this change an artifact of the proof, or is it something more fundamental? In most situations, I'd expect the $\sqrt{\log n}$ term to dominate.
- What are commonly considered regimes for p, q, k in which this threshold is satisfied? For instance, it seems like the regime $p, q = \Theta ( \log (n) / n)$ is not covered by the theorem. What's the minimum scaling of $p, q$ that would work here? And what is the maximum number of communities that can be tolerated? Understanding such qualitative cases would make your results much more interpretable. In a similar vein, it was unclear to me whether the condition you have here is optimal in some sense, or if it's just to prove that vanilla methods succeed in just *some* regime.
Finally, I have a question related to how your work compares to [MZ22b].
- It seems that your work and [MZ22b] tackle a similar parameter regime for community recovery. How do the achievability regions compare between their work and yours? In particular, does the vanilla algorithm work in almost the same parameter regime, or is the parameter regime more restricted?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - I would mention in the abstract that your main contribution is on vanilla algorithms with a large number of communities. This would emphasize the difference between your work and that of [AFWZ20].
- What should the value of $\Delta$ be in Algorithm 1?
- If there is space, could you elaborate on the connections between your polynomial approach and the power iteration method of [MZ22b]? From a quick read of [MZ22b] it seems there may be some similar ideas and it would be valuable for readers in the field to understand those connections further.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations have been adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank this reviewer for their appreciation of our work, useful suggestions and relevant questions.
**Regarding weakness:**
We believe the extra $\log^6 n$ factor can be improved by future works. This term stems from the concentration inequality we used as a black box. A refined analysis of this concentration inequality may remove this factor. However, as you also observed, the $\sqrt{\log n}$ factor seems more fundamental in that it comes from Prop. 2.4.
We can handle the case where $q = \Theta(\log n / n)$. As you observed, the difference between our paper and Vu’s paper is replacing $\sigma$ by $\sqrt{p} \cdot \log^6 n$. If $p < 0.9$, then $\sigma$ and $\sqrt{p}$ are equal up to a constant, so our bound is essentially the same as [Vu18] except for the $\log^6 n$ factor. Here are some example settings of parameters:
- In the regime $q = \Theta(\log n / n)$, $p = \Omega(\frac{k \log^6 n}{\sqrt{n}})$ would be sufficient.
- In the regime $p - q = \Theta(1)$, we can recover up to $k = O\left(\frac{\sqrt{n}}{\log^6 n}\right)$ clusters.
Compared to [MZ22b], we have an extra $\sqrt{\log n}$ factor because we use Prop. 2.4, as was done in [Vu18].
Thanks for your suggestion. We will add more explanations in our paper.
---
**Regarding questions:**
Thanks for the suggestion! We will improve our abstract accordingly.
We can choose $\Delta := 0.8(p − q) n / k$ (please see Section 3.3).
To make their analysis work, [MZ22b]'s power iteration requires the matrix to be nice, i.e., all large eigenvectors are very close. Therefore, in SSBM, they first "shift" the adjacency matrix by considering $G' := G - q\boldsymbol{1}\boldsymbol{1}^\top$. Only the shifted matrix $G'$ has this nice structure; the original $G$ does not. This shifting step simplifies the analysis at the cost of the algorithm no longer being vanilla.
In contrast, we introduced a polynomial $\psi$ and analyzed the eigenspace of $G$ through the power iteration of $\psi(G)$, which is our main novelty. There are two benefits of using this polynomial: (i) $\psi(G)$ has a nice structure; (ii) $\psi$ only appears in our analysis, keeping our algorithm vanilla. | null | null | null | null | null | null |
Replicable Reinforcement Learning | Accept (poster) | Summary:
This paper discusses the development of algorithmic frameworks for replicability, a response to the replicability crisis in the social, behavioral, and data sciences. The paper introduces provably replicable algorithms for machine learning and statistics, including replicability results for control problems, which pose different challenges than batch learning settings. The paper provides a provably replicable algorithm for parallel value iteration and a provably replicable version of R-Max in the episodic setting, which are the first formal replicability results for control problems.
Strengths: 1. The proposed method has a strong mathematical and theoretical analysis, which is convincing.
2. The researched field appears to be valuable and interesting.
3. The writing is clear and easy to understand.
Weaknesses: Although the method has a strong mathematical and theoretical analysis, it would be better to have more sufficient experiments in the experimental section.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can the corresponding analysis continue to be maintained in the setting of deep networks?
2. How much additional cost is required for the proposed method?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Although the method has a strong mathematical and theoretical analysis, it would be better to have more sufficient experiments in the experimental section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer vrU3,
Thank you for your feedback. We are glad that you find our work convincing and valuable.
In response to the comments in the Weaknesses and Questions sections:
“Although the method has a strong mathematical and theoretical analysis, it would be better to have more sufficient experiments in the experimental section.”
* Our algorithms are tabular reinforcement learning algorithms and we evaluate our algorithms on a tabular MDP in section 5. The experiments we conduct are explicitly designed with two goals in mind: a proof of concept for replicable algorithms in a non-trivial environment (i.e., an environment where there is more than one optimal policy, so that replicability is not already implied by optimality) and to show that the sample complexity overhead of replicability is not as onerous as might be suggested from the asymptotic upper-bounds. Our experiments cover both points. That being said, if there are suggestions for experiments that might improve our understanding of replicability in practice, we would be happy to run them!
“Can the corresponding analysis continue to be maintained in the setting of deep networks?”
* The proposed techniques are in their current form not applicable to non-linear function approximation. Even linear function approximation would require the development of completely new tools for replicability. We agree that this is an interesting direction for future work but there are several leaps to make before we can guarantee replicability of deep learning approaches.
"How much additional cost is required for the proposed method?”
* The overhead on the methods is on the order of $|S|^2|A|^2 / \rho^2$ for rPVI (see L173) and $|S|^5|A|^6H^6 /(\epsilon^2 \rho^2)$ for RepRMax (we will add this to the appendix).
---
Rebuttal Comment 1.1:
Title: Keep Score
Comment: I still find this paper interesting at the moment, so I keep my score. | Summary: This paper studies the important topic of RL replicability. Under some assumptions, this work defines $\rho$-replicability, proposes two algorithms, Rep-PVI and Rep-RMAX, and proves their replicability properties.
Strengths: Overall, the development of the paper is smooth, the topic is important, the perspective is novel in the RL literature, and the analytical and empirical evidence seems supportive of the claims.
Weaknesses: Please see the questions section below
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Finite state space and known deterministic reward function seem to be too strong assumptions. The stochasticity comes from 1. exploration and 2. environment stochasticity (including dynamics and reward). If the reward is known and deterministic, a planning algorithm can always be applied and RL is not necessary.
With the parallel sampling definition, the parallel sampling subroutine PS(G) actually turns the transition distribution into a deterministic kernel, this is also a strong assumption. To this end, all stochasticity sources are canceled out in your study.
I may understand it wrongly, but with those assumptions, I wonder if the basic Q-learning algorithm also enjoys the property of convergence to the optimal policy with probability 1. This leads to another question of why the PVI and RMAX algorithms were selected to build the method upon.
For the experiment section, have the authors tried to experiment on some standard RL environments, including the classical navigation tasks (but more standard ones), so that the trade-offs between replicability and performance can have a clearer exposition?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The generality of the proposed method, which also affects the importance of their contribution to the field.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer u4VH,
Thank you for your feedback! We are glad that you find the topic of replicable RL important and novel.
In response to the comments in the Questions section:
“I may understand it wrongly, but with those assumptions, I wonder if the basic Q-learning algorithm also enjoys the property of convergence to the optimal policy with probability 1. This lead to another question of why the PVI and RMAX algorithms are selected to build the method upon.”
* Neither Phased Value Iteration nor Q-learning converge to an optimal policy with probability 1 in the stochastic (PAC) setting we consider since we always have to account for the sampling failure probability. Classical Phased Value Iteration as well as versions of Q-learning are $\varepsilon$-optimal with probability $1-\delta$ under certain coverage assumptions. However, a key point that is missing in this consideration is that there may be more than one $\varepsilon$-optimal policy. As such, neither of these algorithms are replicable, which is the property we study in this work.
* We start at the very beginning of sample complexity analysis and use two of the early algorithms that already prove to be quite challenging in the setting we select. Phased Value Iteration (sometimes Phased Q-Learning) - as the name suggests - is an algorithm that effectively does Value Iteration (repeatedly) when having access to sampled transitions rather than the full transition model. This is the first step of introducing stochasticity to dynamic programming-based algorithms such as classical Value Iteration and it already bears a significant overhead for replicability, and necessitates new techniques not previously appearing in the RL literature.
The second setting we consider is that of episodic exploration and, here, an initial class of algorithms to start with is the $E^3$ or RMax style algorithms. These algorithms contain a natural notion of previously explored states via their sets of known states. We use the sets of known states as a proxy for what parts of the space have been explored and try to replicably have the same sequence of states be added to K. This choice is made to create a way of measuring and comparing exploration across two runs.
“Finite state space and known deterministic reward function seem to be too strong assumptions. The stochasticity comes from 1. exploration and 2. environment stochasticity (including dynamics and reward). If the reward is known and deterministic, a planning algorithm can always be applied and RL is not necessary.”
* Finite state spaces and deterministic reward functions are very common assumptions in the RL literature and the base problem considering these two has only very recently been considered close to solved (Azar et al., 2017; Zanette and Brunskill, 2019; Simchowitz and Jamieson, 2019). Control problems per se are hard and making these assumptions allows us to study them in parts. As mentioned in the text, our algorithms can easily be adjusted to handle stochastic rewards (see L54f) since estimating them on top of transitions or values is just additive overhead. We agree that planning and RL are connected but both warrant their individual study to make scientific progress.
“With the parallel sampling definition, the parallel sampling subroutine PS(G) actually turns the transition distribution into a deterministic kernel, this is also a strong assumption. To this end, all stochasticity sources are canceled out in your study.”
* This is incorrect and the parallel sampling subroutine does **not** automatically give us a kernel. This would only be true in the limit of $n$ where $n$ is the number of samples for every individual state. However, sample complexity bounds study the lowest number of samples we can use to achieve our goal which does not require full estimation of the kernel itself (see Kearns et al. 1998). The challenge for the first algorithm is in fact to obtain a replicable policy under the uncertainty of sampling which is not canceled out. If this were true, standard algorithms like Phased Value Iteration would provide replicable policies but they do not (see Figure 1 for counter-example).
“For the experiment section, have the authors tried to experiment on some standard RL environment? Inclusively the classical navigation tasks (but are more standard ones) so that the trade-offs between replicability and the performance can have a clearer exposition?”
* Our algorithms are tabular RL algorithms and we evaluate them on a tabular MDP in section 5. The experiments we conduct are explicitly designed with two goals in mind: a proof of concept for replicable algorithms in a non-trivial environment (i.e., an environment where there is more than one optimal policy, so that replicability is not already implied by optimality) and to show that the sample complexity overhead of replicability is not as onerous as might be suggested from the asymptotic upper-bounds. Our experiments cover both points.
Tabular RL algorithms do not generally scale well with respect to the size of the state-space, and so are not typically tested in these environments. Given this, and that our primary contributions are of a theoretical nature, we believe that gym environment experiments would detract from, rather than complement, the presentation of our main results. That said, if there are suggestions for experiments that might improve our understanding of replicability in practice, we would be happy to run them!
* Could you specify which classical navigation tasks you are referring to? We considered several tasks from the standard RL book by Sutton et al. (2018) but found them too simple for the purposes of studying replicability. A key property for a good benchmark MDP is that there should be several $\varepsilon$-optimal policies that could be selected, so that we can demonstrate our algorithm chooses the same one with high probability over samples (for a fixed random string), and is therefore replicable.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: I appreciate the authors' detailed and thorough responses to my questions; most of them are well-addressed. I still have a few remaining questions:
1. It is better to have definitions self-contained. e.g., the definition of sampling failure probability.
2. Can the authors explain what insights can be drawn for the Deep-RL community from the finite-space analysis? For example, does it indicate that sampling sufficiently many examples in every state (assuming we are able to query the dynamics model as many times as we need) will improve the consistency of the learned policy?
3. On the contributions to the community given the current discoveries: while it is true that reproducibility in RL is essential, it is often in the sense that policies with similar performance can be achieved with high probability. For instance, in your Figure 1, I don't see finding different approaches to solving the task as a bad property; the actual problem RL algorithms have is more that they cannot converge to similar performance, and this leads to large variance in RL evaluation and comparisons. The intrinsic difficulty is the stochasticity and intricate system dynamics during learning. A natural approach to me seems to be starting from the offline-RL setting, where datasets are fixed and no stochasticity is introduced by exploration.
I've read the authors' responses to the AC regarding the same issue, and I acknowledge the good properties of such a definition. But as far as I'm concerned, those properties are somewhat driven by the analytical perspective, rather than driven by the existing problems in RL.
4. I understand the time remaining may not be sufficient for experiments, but I wonder if the authors can explain what changes should be made and what difficulty there might be to experiment in, for example, the MiniGrid suite.
Many thanks again for your response!
---
Reply to Comment 1.1.1:
Comment: Thank you for the additional questions!
**1. - Sampling failure probability**
We apologize for not being more specific here. The sampling failure probability is a common term in theoretical machine learning and refers to the probability that we draw a poor sample that is not representative of the population. For instance, imagine you are trying to estimate the bias of a coin and you flip the coin $10,000$ times. It is possible that all $10,000$ flips come up heads. While not very probable, this scenario is not impossible. The sampling failure probability accounts for these cases where we simply got unlucky with our sample. It is commonly denoted $\delta$ and needs to be considered whenever we draw samples from a distribution. For instance, you can find definitions for it in our analysis in both Theorems.
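The coin-flip example above can be simulated directly. The following sketch (all names are ours, purely for illustration) estimates the fraction of repeated experiments whose empirical bias estimate misses the true bias by more than a tolerance, which is an empirical stand-in for the sampling failure probability $\delta$:

```python
import random

def estimate_bias(true_p, n, rng):
    # Empirical estimate of a coin's bias from n independent flips.
    return sum(rng.random() < true_p for _ in range(n)) / n

def failure_rate(true_p, n, eps, trials, seed=0):
    # Fraction of repeated experiments whose estimate misses the true
    # bias by more than eps -- an empirical stand-in for the sampling
    # failure probability delta.
    rng = random.Random(seed)
    bad = sum(abs(estimate_bias(true_p, n, rng) - true_p) > eps
              for _ in range(trials))
    return bad / trials
```

As expected from concentration inequalities, the empirical failure rate shrinks rapidly as the number of flips $n$ grows, but it never becomes exactly zero in principle.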
**2 - Practical insights for the deep RL community**
A first crucial observation from our work is that replicability need not be computationally out of reach. If we were not able to achieve replicability for RL procedures in finite-sized MDPs, one might argue that there is little hope for large state-space MDPs. Our algorithms are a first stab at achieving such replicability and demonstrate its viability.
Second, note that our algorithm does not achieve replicability by simply drawing more samples than the non-replicable versions of the algorithms. As such, we do not think that an insight from our work would be that drawing more samples will improve consistency of the learned policies. Clever randomized rounding and thresholding of the value function are required in order to obtain formal replicability and we believe that these might in fact prove useful tools when developing new deep reinforcement learning algorithms. We are actively considering how to integrate these efficiently into practical deep RL algorithms. As mentioned in our response to the AC, this might show itself as differentiable rounding procedures in offline RL or via meticulously selecting which data points to consider in novel experience replay techniques.
---
Rebuttal 2:
Title: Raise My Score
Comment: I really appreciate the authors' detailed response. Most of my concerns have been well addressed. So I raised my score from a 4 to 6.
To better enhance clarity, I would like to suggest that the authors mention the explicit definition of replicability in policy learning at an earlier stage of the paper, preferably in the abstract/intro, to avoid any potential confusion and inaccurate expectations (e.g., to distinguish it from the more intuitive definition of "achieving similar performance"). | Summary: This paper studies replicable reinforcement learning. The authors show that stochastic sample-based value iteration can be done replicably and can explore the space of an MDP to find an optimal policy. Furthermore, they give corresponding theoretical results. The effectiveness of the replicable algorithms is validated by simple experiments, which require fewer samples than the theory suggests.
Strengths: - They first introduce the notion of replicability to RL because RL algorithms are difficult to reproduce.
- They provide two novel algorithms (replicable phased value iteration and replicable exploration) for replicable RL and also give corresponding theoretical results. This formulation provides a good foundation for the problem.
Weaknesses: - From this paper, at each iteration $t$, many episodes interacting with the environment are required. This is equivalent to modeling the environment. If so, it is trivial to obtain similar policies, even though there are some theories. The studies in this area concern a few episodes interacting with the environment at each iteration; after $T$ iterations, the two policies are close to each other. This is just like Policy Iteration and Value Iteration.
- The experimental evaluation is weak, though the paper pays more to the theoretical part.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Definition 2.3 is not clear. What does $r$ mean? This is the same definition as Impagliazzo et al. (2022). There is no citation!
- ''As a result, we expect that replicable estimation of MDPs is the hardest setting in stochastic RL, followed by replicable value function and then policy estimation'', if we don't know the MDP, how do we estimate the fine value function and policy function. I think that MDP is a foundation.
- ''a new threshold k′ is sampled uniformly from [k, k + w]. '', why is k' chosen at random instead of assigning one? What's your motivation for doing this?
- From the left figure of Fig. 2, except for the number of steps, it can be seen that the choice of $\rho_{SQ}$ is crucial. We should know how to choose it. Can you clarify why $\rho_{SQ}=\frac{\rho}{|S|}$ yields a good result? And if we increase the number of steps, when $\rho_{SQ}=\frac{\rho}{|S|}$, do we still get good results?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors explained the limitations of their work well. I do not see an obvious negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer fsAq,
Thank you for your feedback. We appreciate that you think that our notion of replicability provides a good foundation for studying reproducibility in RL.
In response to the comments in the Weaknesses and Questions sections:
“at each iteration, many episodes interacting with the environment are required. This is equivalent to modeling the environment. If so, it is trivial to obtain similar policies, even though there are some theories.”
* Simply interacting with the environment would not give us that we can model it. It is significantly harder to model the full environments than to find a policy (see e.g., Kearns et al 1998). Could you elaborate which theories you are talking about? We are unaware of any theory that would ensure replicability and accuracy of any previous RL procedure.
"The studies in this area concern a few episodes interacting with env at each iteration; after iterations, the two policies are close to each other. This is just like the Policy Iteration and the Value Iteration."
* This seems to be a key misunderstanding. We are not asking that the two policies are close to each other but we are asking they be *identical* across different stochastic interactions with the environment. In our settings, transitions need to be sampled while algorithms like value iteration assume direct access to the transition function. The stochastic version would be Phased Value Iteration which we use in the paper. There is no guarantee for replicability (or policy similarity) in PVI. See Figure 1 for a counter-example.
“The experimental evaluation is weak, though the paper pays more to the theoretical part.”
* Our algorithms are tabular RL algorithms and we evaluate them on a tabular MDP in section 5. The experiments we conduct are explicitly designed with two goals in mind: a proof of concept for replicable algorithms in a non-trivial environment (i.e., an environment where there is more than one optimal policy, so that replicability is not already implied by optimality) and to show that the sample complexity overhead of replicability is not as onerous as might be suggested from the asymptotic upper-bounds. Our experiments cover both points. That being said, if there are suggestions for experiments that might improve our understanding of replicability in practice, we would be happy to run them!
“Definition 2.3 is not clear.”
* r is the shared internal randomness mentioned in the definition and should be stated in the text of the definition. Good catch; we added this in L87. We also added the citation to the definition directly to be more precise.
“if we don't know the MDP, how do we estimate the fine value function and policy function”
* Most RL algorithms do not assume to *know* the MDP, they merely have access to a sampling machine (e.g., an environment). We do not need to estimate the reward function and transition probabilities to run PVI but we can compute a value function without knowing the exact form of the MDP. This is cheaper in terms of sample complexity since the value function is significantly smaller than the transition function.
''why is k' chosen at random instead of assigning one?"
* This is done to achieve replicability. Suppose there are two runs of the algorithm (1) and (2). Our goal is to make updates to the set of known states K identical across runs, to establish downstream replicability. In (2), we would like to add to K exactly all elements that were added to K in (1). Assume there was a fixed threshold k. It could be that in expectation we will see some state-action $(s, a)$ pair enough times that it meets the fixed threshold k. Due to the stochastic nature of exploration, the realized transitions in (1) may cause us to visit $(s, a)$ only (k-1) times, while the realized transitions in (2) cause us to see it (k+1) times. A fixed threshold would mean that we don’t add $(s, a)$ to K in (1), but do add it to K in (2), making replicability difficult to reason about. Yet, we can guarantee that the realized number of visits to a state-action pair concentrates around its mean. By randomizing the threshold k over a large interval, we can ensure with high probability that k does not fall between the two realized values for visitation counts. Hence, the algorithm makes the same decisions about whether $(s, a)$ is added to K in both (1) and (2) (see proof of Theorem 4.2). We have updated the paper to provide this motivation in the algorithmic exposition.
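The randomized-threshold idea described above can be sketched in a few lines. This is a minimal illustration under our own naming conventions, not the paper's RepRMax pseudocode: the decision of whether a state-action pair becomes "known" uses a threshold $k'$ drawn from $[k, k+w]$ with randomness shared across runs, so two runs with slightly different realized visit counts usually make the same decision.

```python
import random

def becomes_known(visit_count, k, w, shared_rng):
    # Draw the threshold k' uniformly from [k, k + w] using randomness
    # shared across runs; the pair is marked "known" once its visit
    # count reaches k'. (Illustrative sketch, not the exact algorithm.)
    k_prime = shared_rng.uniform(k, k + w)
    return visit_count >= k_prime

# Two runs sharing the same random string, with slightly different
# realized visit counts (99 vs. 101), agree unless k' lands in (99, 101].
agree = sum(
    becomes_known(99, 50, 100, random.Random(s))
    == becomes_known(101, 50, 100, random.Random(s))
    for s in range(500)
)
```

With $w = 100$, the shared threshold falls in the two-unit disagreement window with probability only $2/100$, so the vast majority of the 500 paired runs above make identical decisions.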
“Can you clarify why $\rho_{SQ} = \frac{\rho}{|S|}$ yields a good result? And if we increase the number of steps, when $\rho_{SQ} = \frac{\rho}{|S|}$, do we still get good results?”
* Informally, the replicable statistical query algorithm uses its random string to partition the [0,1] interval into subintervals, and assigns canonical representatives to each subinterval. It empirically estimates the value of the query using its sample, and then returns the nearest canonical representative to the empirical estimate. For a fixed sample, taking smaller values for $\rho_{SQ}$ improves replicability at the cost of accuracy of query responses, by increasing the width of each subinterval of the partition so that there are fewer partition elements overall. This way, two estimates that are close are likely to be rounded to the same representative, but the possible error induced by this rounding will increase.
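The rounding mechanism described informally above can be sketched as follows. This is a hypothetical illustration (the function name and the `width` parameter are ours; `width` plays the role of the subinterval width, which grows as $\rho_{SQ}$ shrinks), not the exact replicable SQ subroutine:

```python
import math
import random

def replicable_round(estimate, width, shared_rng):
    # Shared randomness picks an offset that shifts a fixed-width
    # partition of [0, 1] across runs.
    offset = shared_rng.uniform(0, width)
    # Index of the subinterval containing the empirical estimate.
    idx = math.floor((estimate - offset) / width)
    # Canonical representative: the midpoint of that subinterval.
    return offset + (idx + 0.5) * width
```

Two runs sharing the same random string return identical representatives unless a subinterval boundary happens to fall between their two empirical estimates, which is unlikely when the estimates are close relative to the width; the price is that the returned value may deviate from the estimate by up to half the width.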
* As long as the sample sizes are sufficiently large and $\rho_{SQ}$ is chosen small enough, we expect to achieve high replicability. The previous sample size of 30 runs was not large enough to reflect this, so we increased the number of runs over which we compute identical value functions to 150. The new plots (appended in the PDF) show that with a large sample size, any $\rho_{SQ}$ smaller than $\rho_{SQ} = \rho/|S|$ works well. In all cases, $\rho_{SQ} = 0.2$ results in poor performance, arguably because the subintervals are so small that the likelihood of falling into different bins is too high. | Summary: This paper proposes a new reinforcement learning algorithm motivated by the replicability crisis and gives proofs for the proposed method, providing a new perspective in this field.
Strengths: 1. The idea of replicable reinforcement learning is brand new and may provide a new perspective for reinforcement learning.
2. The proof of the related lemma is sufficient and strict.
Weaknesses: 1. The experiments are insufficient. Though the proposed replicable reinforcement learning requires far fewer samples, the performance of the replicable RL is lacking.
2. More comparable experiments (i.e. DDPG, TD3, A3C, SAC, etc.) should be carried out to show the effectiveness of the proposed method.
3. It is better to provide some insightful analysis or conclusion from the replicable RL.
4. The organization may lack cohesion and coherence. Though several definitions and lemma are introduced, the relationship and function of each lemma lack a detailed claim.
5. The proposed replicable RL should be evaluated in the public benchmark such as gym, MuJoCo, etc.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please refer to the Weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have discussed the limitations in details. Nevertheless, the proposed method should be wildly evaluated in different benchmarks and provide some insights to readers, especially for a new idea.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer BN9y,
Thank you for your feedback, we appreciate that you see the direction of replicable reinforcement learning as a novel perspective.
In response to the comments in the Weaknesses and Questions sections:
“The experiments are insufficient. Though the proposed replicable reinforcement learning requires much fewer samples, the performance of the replicable RL lacks.”
* Would you be willing to clarify this point? Our proposed algorithms require more samples than the standard versions of these algorithms and it is not clear to us what is intended by “requiring fewer samples”. The algorithms we provide are formally replicable **and** $\varepsilon$-optimal. In all experiments, the learned policies are $\varepsilon$-optimal, so the performance can technically not be better than that. (see Theorem 4.1 and 4.2)
“More comparable experiments (i.e. DDPG, TD3, A3C, SAC, etc.) should be carried out to show the effectiveness of the proposed method.”
* We think that comparison to common benchmark deep learning methods is inappropriate for this stage of replicable reinforcement learning research. As we show in our work, designing reinforcement learning algorithms that can learn approximately optimal policies while simultaneously satisfying replicability is a challenging task that requires new technical tools and algorithm design. Our work initiates the study of replicable reinforcement learning, building a technical toolkit for replicable RL that we hope can be used to obtain more performant algorithms in the future, but our techniques in their current form are not applicable to non-linear function approximation or deep learning. Even linear function approximation would require the development of completely new tools for replicability. We agree that this is an interesting direction for future work, and it is a line of research we plan to pursue, but there are substantial technical leaps to make before we can guarantee replicability of deep learning approaches.
“It is better to provide some insightful analysis or conclusion from the replicable RL.”
* As we state in the introduction, we provide novel algorithms that are formally replicable. (Line 35 and following) These algorithms are provably correct and replicable with high probability (see Theorem 4.1 and 4.2). The analysis of the algorithms is provided either in the Appendix B.1 and B.2 for Algorithm 1 and in large parts in the main text (L215 - 276) or Appendix B.3 for Algorithm 2. The analysis of the empirical results is done in L298-305. The conclusion is provided in the Conclusion & Future work section. Would you mind clarifying what additional analysis we could add to improve the paper?
“The organization may lack cohesion and coherence. Though several definitions and lemma are introduced, the relationship and function of each lemma lack a detailed claim.”
* As stated in the text, Definitions 2.1 and 2.2 are used to describe the theoretical settings we consider. Definition 2.3, 2.4 and Theorem 2.1 frame the context of the work and provide the required formal background. Definition 3.1 is one of the contributions and is used to derive the algorithms we propose. Both Theorem 4.1 and 4.2 satisfy this Definition 3.1 (see their respective text). Lemma 4.1 is used to provide the accuracy argument for Theorem 4.1 (see L166-167) (see full proof in Appendix B.1 and B.2). Lemma 4.2 (L215, typo here) is used to prove convergence of Theorem 4.2 (see L230). This should cover all the definitions and lemmas in the main text and all of them are explicitly used in some main claim or are used to frame context and setting. Can you elaborate which specific definition or lemma is unclear?
“The proposed replicable RL should be evaluated in the public benchmark such as gym, MuJoCo, etc.”
* Our algorithms are tabular reinforcement learning algorithms, and we evaluate them on a tabular MDP in Section 5. The experiments we conduct are explicitly designed to demonstrate that our algorithms replicably converge to good policies, not just that they learn good policies, and moreover that the sample complexity overhead of achieving replicability in practice may be less significant than what is suggested by the asymptotic upper bounds of our theoretical guarantees.
* Tabular reinforcement learning algorithms do not generally scale well with respect to the size of the state-space, and so are not typically tested in gym environments. Given that our algorithms are tabular, and that our primary contributions are of a theoretical nature, we believe that gym environment (or similar) experiments would detract from, rather than complement, the presentation of our main results. That being said, if there are suggestions for experiments that might improve our understanding of replicability in practice, we would be happy to run them!
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification provided by the authors.
The reviewer tends to keep the original score. | Rebuttal 1:
Rebuttal: Dear Reviewers,
Thank you for your valuable feedback! We are grateful for the questions, suggestions, and opportunity to clarify the intent of our work.
We would first like to provide a set of common answers to the points raised by reviewers here. We will provide the responses to individual reviewer’s questions in the appropriate, reviewer-specific response form.
First and foremost, we would like to highlight that this is a theoretical manuscript with a focus on providing the first-ever results for formally replicable reinforcement learning. Experimentation is secondary and is done to validate the potential usefulness of general replicable algorithms for real-world problems in the future. We are excited that several reviewers see the novelty and need for such algorithms and that the reviewers believe that our claims are well supported. For other commonly highlighted comments or questions:
Deep Learning Baselines: Several reviews ask about comparisons to deep learning methods. We can only stress again that this is the first fundamental work on formally replicable reinforcement learning, and we think that comparison to common benchmark deep learning methods is inappropriate for this stage of replicable reinforcement learning research. As we show in our work, designing reinforcement learning algorithms that can learn approximately optimal policies while simultaneously satisfying replicability is a challenging task that requires new technical tools and algorithm design. Our work initiates the study of replicable reinforcement learning, building a technical toolkit for replicable RL that we hope to further develop in the future, but our techniques in their current form are not applicable to non-linear function approximation or deep learning. Even linear function approximation would require the development of completely new tools for replicability. We agree that this is an interesting direction for future work, and it is a line of research we plan to pursue, but there are several substantial technical leaps to make before we can guarantee replicability of deep learning approaches.
Choice of algorithms. We start at the very beginning of sample complexity analysis and use two of the early algorithms that already prove to be quite challenging in the setting we select. Phased Value Iteration (sometimes Phased Q-Learning) - as the name suggests - is an algorithm that effectively does Value Iteration (repeatedly) when having access to sampled transitions rather than the full transition model. This is the first step of introducing stochasticity to dynamic programming-based algorithms such as classical Value Iteration and it already bears a significant overhead for replicability, and necessitates new techniques not previously appearing in the RL literature.
The second setting we consider is that of episodic exploration, and here an initial class of algorithms to start with is the E^3- or RMax-style algorithms. These algorithms contain a natural notion of previously explored states via their sets of known states. We use these sets of known states as a proxy for which parts of the space have been explored, and we aim to have the same sequence of states replicably added to K across runs. This choice creates a way of measuring and comparing exploration across two runs.
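To illustrate the shared-randomness idea behind this known-set mechanism, here is a toy sketch (our illustration, not the paper's actual RepRMAX algorithm; all names are ours): both runs draw the same randomized visit-count threshold from shared randomness, so they agree on exactly when a state becomes "known", which is what lets the sequence of additions to K coincide across runs.

```python
import numpy as np

def make_known_test(m_min, m_max, rng):
    # Draw one random visit-count threshold from the shared randomness.
    # Two runs seeded identically draw the same threshold, so they agree
    # on exactly which visit counts make a state "known".
    m = int(rng.integers(m_min, m_max + 1))
    return lambda visit_count: visit_count >= m

# Two "runs" sharing the same seed agree on the known-state test.
known_run1 = make_known_test(50, 100, np.random.default_rng(7))
known_run2 = make_known_test(50, 100, np.random.default_rng(7))
```

With a fixed deterministic threshold, two runs whose (noisy) visit counts straddle the boundary would disagree on membership in K; randomizing the threshold makes such boundary collisions unlikely.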
Experiments: Our algorithms are tabular reinforcement learning algorithms and we evaluate our algorithms on a tabular MDP in section 5. The experiments we conduct are explicitly designed with two goals in mind: a proof of concept for replicable algorithms in a non-trivial environment (i.e., an environment where there is more than one optimal policy, so that replicability is not already implied by optimality) and to show that the sample complexity overhead of replicability is not as onerous as might be suggested from the asymptotic upper-bounds. Our experiments cover both points.
It was suggested that we include experiments in a classical gym environment as well. Tabular reinforcement learning algorithms do not generally scale well with respect to the size of the state-space, and so are not typically tested in these environments. Given that our algorithms are tabular, and that our primary contributions are of a theoretical nature, we believe that gym environment (or similar) experiments would detract from, rather than complement, the presentation of our main results. That being said, if there are suggestions for experiments that might improve our understanding of replicability in practice, we would be happy to run them!
We have integrated your feedback into the draft of the manuscript. Here is a brief summary of the changes we are going to add:
1.) We found a minor bug in the analysis of the Replicable R-Max algorithm which we have fixed, and adjusted the rates accordingly.
2.) We made clarifying changes on the notation in Definition 2.3 and fixed a typo in the enumeration of Theorem from (previously) 4.1 to (now) 4.2.
3.) We updated the experimental plots using a more representative sample (new plots in PDF) and included a more intuitive explanation of the analyzed parameter $\rho_{SQ}$.
4.) We extended the informal description of the RepRMAX algorithm to provide better intuition for the randomized threshold.
Pdf: /pdf/568fbc0f0fad2d4c9caf7508635a3de70d3249ff.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Understanding Deep Gradient Leakage via Inversion Influence Functions | Accept (poster) | Summary: **Key Contribution:** This work proposes a new method for analysis of privacy risk in the deep leakage from gradients attack, which does not rely on assumptions of the model architecture or attack optimization method.
**Approach:** The work defines an “Inverse Influence Function” (I2F) which is able to determine some metric of privacy risk given an applied defense to gradients, irrespective of the model architecture or attack optimization technique. The authors use their framework to identify that the eigenvalues of the matrix $JJ^T$ greatly impacts the MSE of any possible recoveries - in particular, the paper identifies that the smallest eigenvalues can greatly reduce the risks of leakage.
**Evaluation:** The paper evaluates their approach using LeNet and ResNet18, trained on MNIST and CIFAR10. They consider the impacts of different eigenvalues on the predicted MSE in their I2F framework, and find that the riskiest samples (i.e., those with the lowest-MSE recoveries) tend to be those where the Jacobian had large eigenvalues. The paper also evaluates the impacts of different model initialization schemes on recovery MSE.
Strengths: **Originality:**
The paper reframes analysis of DLG attacks in terms of prior work by Koh and Liang. In this way, it is an original framing of the DLG problem, using established techniques.
**Quality:**
The paper is of good quality, providing interesting conclusions and analysis of the I2F approach.
**Clarity:**
The paper is well-written and clear - the mathematical steps are simple and easy to follow, with intuitive results.
**Significance:**
The paper builds a simple yet intuitive mathematical framework to estimate privacy risk in DLG attacks. The key significance here is in the simplicity and efficiency of the approach, which allows it to be easily used in practice.
Weaknesses: **Originality:** The paper claims that prior work by Fan et al. (2020) is not generalizable to different architectures, such as convolutional networks. However, in Fan et al. (2020) Appendix A it is established that such layers can be re-written as fully-connected ones. It is unclear if the paper’s claims about the limitations of prior approaches are true. A more in-depth comparison with prior work would help establish the novelty of the paper.
**Limited Evaluation:** The scope of the evaluation could be broadened to further strengthen the paper’s conclusions. It would be interesting to see how the estimation of MSE changes with respect to training iterations - prior work (as cited in the paper) demonstrates that trained models are more susceptible to the DLG attack, and hence justifying this empirical finding with the I2F framework would lend more credence to its utility. Moreover, the paper only considers two defenses: gradients perturbation, and mixing of training samples. However, the paper could be stronger if it considered more defenses e.g. gradient pruning, which is a commonly considered defense.
**Limited Applicability to Realistic Scenarios:** The paper cites Huang et al. 2021 and mentions that BN statistics are needed for a realistic inversion. However, it is unclear how the lack of BN statistics impacts the I2F framework’s utility in a realistic scenario.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - (40) What does “converging to the optimal attack” mean in this sentence? Do you mean a perfect recovery of the training data? Or that the optimization method used in the attack itself is optimal?
- (145) Only considering MSE recovery leaves out situations where the recovered image is semantically the same as the private image, but differs in individual pixels. Have you considered alternative metrics which are less sensitive to individual pixels e.g. total variation?
**Minor nits:**
The original attack is referred to as “Deep Leakage from Gradients” - is there a reason you chose a different terminology in your paper?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors sufficiently addressed the potential social impact and limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks a lot for the positive views of the quality and significance of our work! We are glad to address your concerns.
**W1: (Originality)** Fan et al. (2020) can generalize to other architectures, like CNNs.
**A1:** Thanks for the question.
- Fan et al. (2020) mentioned that their method can be generalized to convolutional layers. We apologize for the mistake and will revise it in the final paper.
- The main difference between our work and Fan et al. (2020): Fan et al. (2020) provided a good understanding of when the original input data can be determined and proposed an **upper bound of the relative error**, which measures the best-case privacy risk considering the weakest attack. However, our work assumes there exists a stronger attack and focuses on the **lower bound of the RMSE**. Thus, what we consider is the worst-case privacy risk. Both the upper and lower bounds are important for a better understanding of gradient inversion attacks. So we believe **both Fan et al. (2020) and our work are critical to understanding gradient inversion attacks**.
- **Moreover**, other privacy quantifications like differential privacy [E, F] also measure the worst-case privacy risk, so we believe that following prior work in targeting the worst case is reasonable.
**W2 (Limited Evaluation):** How does I2F change in training? Evaluate I2F on grad pruning.
**A2:** Thanks for the suggestion.
- **I2F during training:** In Fig.4 of the attached PDF, we evaluate the change of $I_{lb}$ and recovery MSE during model training. It is observed that **$I_{lb}$ increases over epochs, which means a decreasing privacy risk**. This is consistent with existing empirical results that a well-trained model is more difficult to invert from its gradients than a randomly initialized model [B, C] (as we claimed in lines 290-292).
- **I2F with gradient pruning:** In Fig.1 of the attached PDF, we evaluate I2F under the gradient pruning defense where I2F is linearly correlated with RMSE. Thus, **I$^2$F is generalizable with gradient pruning defense**.
We will add this interesting discussion to the final paper.
**W3 (Limited Applicability):** Based on Huang et al. (2021), how does the lack of BN impact the I2F framework’s utility
**A3:** Thanks for the comments.
- First, we take a different standpoint from Huang et al. (2021). Huang et al. aim to strengthen the attack by leveraging the BN statistics, which was shown to be more effective than attacks without BN statistics. In other words, **lacking BN statistics as regularization may weaken the attacks**.
- In contrast, we aim to estimate the risks from potential gradient inversion attacks from the victim's standpoint. The victim, who provides the gradient, always has access to the full network including BN statistics. **Without BN statistics, we may underestimate the risks and therefore leave the victim exposed**.
- Unlike Huang et al., BN statistics are not explicitly used for regularizing and improving inversion attacks in I2F. With the same I2F formulation, **we can estimate the risks from both vanilla gradient inversion or the one with BN-guided regularization (Huang et al., 2021)**.
**Q1:** What does “converging to the optimal attack” mean (line 40)
**A1:** We assume the attack is perfect given the exact gradient of a sample.
- Note that the sample itself is a solution for Eq(1). If the solution of the inversion attack is unique, then it is equivalent to saying the sample is the unique optimal solution for solving Eq (1). Thus, given the exact gradient of a sample, the attack can exactly recover the sample.
- Even if the solution is not unique, we can still essentially assume the attack can attain the sample in the worst case.
- When noise exists, the attack is unlikely to recover the original images exactly, but we assume it can recover the image that is closest to the original image. We will revise the paper to make this clearer.
**Q2:** MSE lacks semantic info. Try other metrics that are less sensitive to individual pixels.
**A2:** Thanks for the suggestion.
- We follow most existing work [B, C, D] to use MSE (or PSNR, which is negatively correlated with MSE) as the default measure but we agree on the insufficiency of MSE for larger and more complex models.
- In Fig.3 of the attached PDF, we try two metrics that capture more structural and semantic information between images, i.e., SSIM and LPIPS[A]. SSIM measures the structural similarity between two images, and LPIPS measures the semantic distance. We consider RN18 on CIFAR10 and LeNet on MNIST both with GS and DGL attacks. We find that **$I_{lb}$ is linearly correlated with these two metrics and is a good estimator for the structural similarity and semantic distance between the original and recovered images**.
**Minor:** Why use DGL instead of DLG (Deep leakage from gradients)
**A:** We used DGL because “Deep Gradient Leakage” is more concise than “Deep Leakage from Gradients”. We will revise the final paper to avoid confusion.
[A] Zhang, Richard, et al. "The unreasonable effectiveness of deep features as a perceptual metric." CVPR. 2018.
[B] Balunović, Mislav, et al. "Bayesian framework for gradient leakage." arXiv (2021).
[C] Geiping, Jonas, et al. "Inverting gradients-how easy is it to break privacy in federated learning?." NeurIPS (2020).
[D] Zhu, Ligeng, Zhijian Liu, and Song Han. "Deep leakage from gradients." NeurIPS (2019).
[E] Dwork, Cynthia. "Differential privacy." International colloquium on automata, languages, and programming. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006.
[F] Abadi, Martin, et al. "Deep learning with differential privacy." ACM SIGSAC. 2016.
---
Rebuttal 2:
Title: Follow-up on rebuttal and a kind reminder
Comment: Dear Reviewer X287,
We want to thank you for your constructive suggestions and thoughtful reviews, which are valuable to improving our paper. As a follow-up on our rebuttal, we would like to kindly remind you that the close date of the discussion is approaching. We hope to use this open response window to discuss the paper, answer follow-up questions, and improve the quality of our paper. Have you gotten a chance to read our rebuttal, in which we tried our best to address your concerns? We want to make sure that you found our responses solid and convincing. And we would be more than happy to provide more information or clarification.
Authors
---
Rebuttal Comment 2.1:
Comment: Thank you for the thorough responses to my concerns, and the additional follow-up reminder.
I am satisfied with the author's responses to my questions, and have raised my score of the paper. My only suggestion is that the authors include more detail about the scenario related to batchnorm statistics, as prior DGL attacks assume access to these statistics and include them as a regularization term.
---
Reply to Comment 2.1.1:
Title: Thanks for raising your score
Comment: We are glad that all your concerns are addressed. We really appreciate your valuable comments and your acknowledgment of the contribution of our work! Based on your suggestion, we will revise our final paper to include a more detailed discussion on the BN prior. | Summary: The authors propose the inverse influence function (IIF), an indicator of how the reconstructed input (from a gradient inversion attack) changes with respect to a gradient change. This function can be simply formulated using the Jacobian and the gradient change. The correlation between the proposed measure and reconstruction quality is demonstrated both theoretically and empirically. Also, IIF can predict the vulnerability of Gaussian perturbation (probably applied for privacy protection) based on the Jacobian (or sample). Using this fact, samples whose Jacobians have larger maximum eigenvalues are considered to remain unsafe under perturbation-based protection. This fact is also empirically demonstrated on MNIST. Various model initialization schemes are also analyzed using the eigenvalues of the Jacobian.
Strengths: 1. The paper is easy to follow.
2. The formulation of IIF is novel and interesting.
3. The correlation between the proposed measure and reconstruction quality is empirically demonstrated on several datasets
4. Both theoretical and experimental analyses on Gaussian perturbation based privacy protection using IIF is novel and important.
Weaknesses: 1. There is no experimental result on large-scale dataset and large DNNs.
2. The worst-case assumption can be very strong in some cases, when the optimal point from the attack does not reach the ground truth.
3. The theory is strongly based on L2 distance between gradients. How about GS attack case, which is based on cosine-similarity loss?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weaknesses for basic questions.
Another question: what is the conclusion from the several analyses of model initialization and perturbation-based protection using IIF?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitation is included in the submission.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We really appreciate the valuable comments from the reviewer. We are glad to address the concerns.
**W1:** No experiments on large DNNs and large datasets
**A1:** Thanks for the suggestion. In Fig.5 of the attached PDF, we evaluate our metric on the large model (ResNet152) and large dataset (ImageNet).
1. For larger models, the MSE is no longer a good metric for the recovery evaluation. Even if state-of-the-art attacks are used and the recovered image is visually similar to the original image in Fig.5(b), the two images are measured to be different by MSE, due to the visual shift: The dog head is shifted toward the left side. To capture such shifted similarity, we use LPIPS [C] instead, which measures the semantic distance between two images instead of the pixel-to-pixel distance like MSE.
2. Fig.5(a) shows that I2F is correlated to LPIPS using large models and image scales. This implies that **I2F is a good estimator of recovery similarity**.
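As a toy numeric illustration of this point (our example, not taken from the paper or its rebuttal PDF): shifting a small bright square by a single pixel leaves the content visually unchanged, yet pixelwise MSE registers a large difference.

```python
import numpy as np

# An 8x8 image containing a small bright square, and the same square
# shifted one pixel to the right -- semantically the same content.
img = np.zeros((8, 8))
img[2:5, 2:5] = 1.0
shifted = np.roll(img, 1, axis=1)

# Pixelwise MSE treats the shifted copy as substantially different:
# 6 of the 64 pixels change by 1.0, so the MSE is 6/64, about 0.094.
mse = float(np.mean((img - shifted) ** 2))
```

A perceptual metric like LPIPS, by contrast, compares features rather than aligned pixels and so is far less sensitive to such shifts.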
**W2:** The worst-case assumption can be very strong in some cases when the optimal point from the attack does not reach ground truth.
**A2:** Thanks for the comments.
- It is easy to see that the ground truth (the private image) is one optimal solution for the attack objective. The concern may be whether the optimization algorithm can converge to this desired optimal solution. If that is the concern, we believe the attack can evolve to be much stronger in the near future; we have already seen many recent attack approaches that are increasingly powerful [D, E]. Thus, it is important to bound the risk in advance, before the risk materializes.
- **In addition**, worst-case risk estimation is a common practice. For example, differential privacy [A, B] bounds the worst-case chance that a sample is identifiable.
**W3:** Theory is based on L2-norm attack. How about the cos-sim attack?
**A3:** Thanks for this question. **Our metric is also applicable for evaluating cos-sim attacks**.
- Note that the minimizer of the L2-norm attack is one solution to the cos-sim (GS) attack. That means **an optimal solution to the cos-sim attack can be attained by an optimal L2-norm attack, which is our assumption**. Therefore, the L2-norm-based theorem is applicable to the cos-sim attack as well.
- **Empirically**, we evaluate $I_{lb}$ on GS attack based on cosine-similarity inversion loss as in Fig.2 and Fig.12 of the main body and appendix of our paper, respectively. It shows that **$I_{lb}$ is linearly correlated to the metrics of MSE, which proves the utility of our metric under GS attack**. We will include the discussion in the revision.
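The containment argued above can be checked numerically; the sketch below is our own illustration with made-up gradient values, not code from the paper. An exact minimizer of the L2 objective reproduces the gradient and therefore also attains the optimum of the cosine-similarity objective, whereas a rescaled gradient is cosine-optimal without being L2-optimal.

```python
import numpy as np

def l2_loss(g_hat, g):
    # the L2-norm inversion objective on gradients
    return float(np.sum((g_hat - g) ** 2))

def cos_loss(g_hat, g):
    # the GS-style objective: 1 - cosine similarity of the gradients
    return 1.0 - float(g_hat @ g) / (np.linalg.norm(g_hat) * np.linalg.norm(g))

g = np.array([1.0, 2.0, -0.5])  # a stand-in "true" gradient
g_exact = g.copy()              # an exact L2 minimizer reproduces g exactly
g_scaled = 3.0 * g              # a rescaling changes the L2 loss but not the angle
```

So every L2-optimal solution is cosine-optimal, but not vice versa: `g_scaled` attains the cosine optimum while incurring a strictly positive L2 loss.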
**Q1:** What is the conclusion from several analyses on model initialization or perturbation-based protection using IIF?
**A1:** Thanks for the question.
1. Firstly, we find that **the unfairness of privacy protection exists across both samples and classes**. Even under the same noise (distribution), some samples or classes may be more vulnerable to privacy leakage, as shown in **Fig.5**.
2. Secondly, **the protection effect of the perturbation itself is not fair**, which is implied from the results that the eigenvectors with smaller eigenvalues are more effective on protection as shown in **Fig.4**, even though all the eigenvectors have the same norm.
3. Thirdly, since a well-trained model is much more difficult to invert from its gradients, we evaluate the effect of different initializations on inversion and find that **Kaiming and Xavier initializations carry less privacy risk than the normal and uniform initializations**, as shown in **Fig.6**. We will include a clearer conclusion in the revision.
[A] Dwork, Cynthia. "Differential privacy." International colloquium on automata, languages, and programming. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006.
[B] Abadi, Martin, et al. "Deep learning with differential privacy." ACM SIGSAC. 2016.
[C] Zhang, Richard, et al. "The unreasonable effectiveness of deep features as a perceptual metric." CVPR. 2018.
[D] Haim, Niv, et al. "Reconstructing training data from trained neural networks." NeurIPS (2022)
[E] Kariyappa, Sanjay, et al. "Cocktail party attack: Breaking aggregation-based privacy in federated learning using independent component analysis." ICML, 2023.
---
Rebuttal Comment 1.1:
Title: Response to the Rebuttal
Comment: Thanks for the rebuttal.
Response to A1 : Did you apply the cos-sim attack in the large DNN experiments? As far as I know, the third dog image can be recovered well using the cos-sim attack in the paper "Inverting Gradients -- How easy is it to break privacy in federated learning?" (Geiping et al.). Are the hyperparameters tuned enough?
Response to A2 : The original purpose of this work is to measure the "change" in reconstruction quality from gradient manipulation. I appreciate that sometimes it is hard to derive the formula without such an assumption. Nevertheless, the application of I^2F seems questionable when input cannot be recovered perfectly from gradient. Can we focus only on "change" without such assumption?
Response to A3 : Unfortunately, I cannot agree with the assumption that cos-sim attack is the optimal L2-norm attack.
Response to (A to Q1) : Thanks for addressing the question.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer VRVE
Comment: Thanks for your response. We are glad to address your concerns.
- **R1:** Yes, we apply cos-sim attack following [F] (Geiping et al.) in the large DNN experiments. We follow their hyperparameters and tuned them in our case to ensure a high-quality reconstruction.
1. Though the original dog image is the same in the two experiments, our experimental setting differs from, and is harder than, that of the dog image in Fig.1 of [F]: our experiment is conducted in a hard setting with noised gradients instead of raw gradients. With a clean gradient, the visual results of the dog image from [F] and from our experiments are very similar.
2. We have carefully tuned the hyperparameters, and **our results are consistent with those in [F]**. In Figs.13 and 14 of [F], the visual results of recovered images for RN152 on ImageNet are also worse than those of RN18. Meanwhile, [F] presents a visual shift in its recovered images similar to the one in our results.
3. Worth to mention, the large DNN experiments aim to show the relationship between our metric and LPIPS with a large model and dataset. Fig.5 in the attached PDF already shows that **I2F is a good estimator of recovery similarity**.
- **R2:** Thanks for your follow-up questions.
- First, we argue that the worst-case assumption is not proposed for the ease of derivation but is essential and common in practice.
1. The original image is always an optimal solution for the inversion loss in Eq.1. Therefore, **theoretically, there always exists an algorithm that can converge to the original image**.
2. To our best knowledge, there is no evidence to show the attack cannot approach the worst case where the original input is recovered. Instead, empirical results have shown that the images can be recovered almost perfectly [F] (Geiping et al.). Thus, due to the sensitivity of privacy, a worst-case assumption is necessary to strictly bound possible privacy risks with arbitrary strong attacks, which is commonly imposed by the literature [A, B, G].
- Second, it is an open yet interesting question to investigate the change without the worst-case assumption. However, we argue that such analysis has the following difficulty:
- To analyze the change of reconstruction, we need to know the recovery with the clean gradient. Nevertheless, without the worst-case assumption, it is hard to quantify the relationship between the original and recovered images. We will further discuss this as a future direction in our final paper.
- **R3:** We agree that the cos-sim attack is not the optimal L2-norm attack, which is **NOT** our claim, either. We believe there is a misunderstanding regarding our explanation of the relationship between the L2-norm and cos-sim attack. We argue that an optimal L2-norm attack solution is also an optimal one to the cos-sim attack, but not vice versa, because the cos-sim attack only considers the angle between the gradients of the original and recovered images. In other words, the L2-norm-based conclusion should apply to cos-sim attacks, (but not vice versa).
[A] Dwork, Cynthia. "Differential privacy." International colloquium on automata, languages, and programming. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006.
[B] Abadi, Martin, et al. "Deep learning with differential privacy." ACM SIGSAC. 2016.
[F] Geiping, Jonas, et al. "Inverting gradients-how easy is it to break privacy in federated learning?." NeurIPS 2020.
[G] Guo, Chuan, et al. "Bounding training data reconstruction in private (deep) learning." ICML, 2022 | Summary: The authors propose Inversion Influence Function ($I^2F$), a closed-form lower-bound approximation that estimates the recovery $L_2$-norm caused by gradient perturbation in gradient inversion attacks. Detailed mathematical proof and experiments are provided, with comparisons of privacy vulnerability with regard to data, model, perturbation, and attack methods.
Strengths: - The theorem is supported with detailed math proof and experiment results.
- $I^2F$ gives a good approximation of privacy vulnerability with regard to data, model, perturbation, and attack methods. Most importantly, one can use it to find a theoretically optimal direction to perturb the gradient in federated learning.
Weaknesses: - In Lemma B.3 (appendix), $\nabla_x L_I(x;g) = \nabla_x \| \nabla_\theta L(x, \theta) - g \|^2 = 2\, \nabla_x \nabla_\theta L(x, \theta) (\nabla_\theta L(x, \theta) - g)$, the factor $2$ is missing.
- For figure 7 in the appendix, there's no explanation of what different lines mean, and it makes no sense to compare the running time of power iteration to that of inversion attack.
- There should be quoted conclusions/studies on the validity of assumptions, especially the actual value of $\mu_L$ and $\mu_J$. If they just exist but take very large values, Eq. (7) will be nonsense.
- $I_{lb}$ in Fig. (1) is an approximation of Eq. (4) and is different from $I_{lb}$ defined in Eq. (5), and the computation speedup described in "Efficient Evaluation" is not used (only the convergence speed is evaluated).
- $L_2$-norm is used in almost all math equations, but the experiments use RMSE/MSE, which makes it hard to tell how tight the approximated lower bound is.
- In Sec 5.3, there exist both $\sigma(\theta^Tx)-1$ and $\sigma(\theta^Tx)-b$, which looks like an incomplete replacement.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - What is the perturbation scale used in Fig. (4)?
- In Fig. (5), are gradient perturbations sampled from Gaussian noise, or do they follow the direction of the eigenvector with the smallest eigenvalue?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The proposed $I^2F$ relies on 5 assumptions that may not hold true. Out of these assumptions, 3.2 is satisfied by common loss functions, 3.3 is bypassed by "Extension to singular Jacobians". However, other assumptions, as well as the variables they introduce ($\mu_J$ and $\mu_L$), have not been well discussed.
> After rebuttal
We found the proposed method partially overlaps with [the Fisher Information Loss](https://arxiv.org/pdf/2102.11673.pdf), which weakens its technical novelty. However, there are still differences between the two works in terms of the lower bounds and algorithms they arrive at. I agree with AC that this work could be accepted conditioned on that a more thorough discussion of prior works is included in its final version.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We really appreciate the affirmation of our contribution.
**W1:** In Lemma B.3, factor 2 is missing
**A1:** Thanks for pointing this out. We will revise it accordingly.
**W2:** The explanation of Fig.7 and why comparing the convergence of power iteration and the attack.
**A2:**
- **What do different lines mean in Fig.7:**
- In Fig.7(a), different lines mean the convergence of power iteration under different random seeds.
- In Fig.7(b), different lines mean the convergence of the inversion loss under different learning rates (1, 0.5, 0.1, 0.05, 0.01). For each line in (b), it is repeated with 5 different random seeds.
- **Why compare the running time of power iteration with the attack:**
- Conducting the inversion attack can be time-consuming and computationally intensive. **I2F is an attack-free analysis tool that can be used to evaluate the privacy risk given a sample and a perturbation. The efficiency gap between calculating I2F and conducting the inversion attack is a critical metric of the utility of I2F in practice.** Since we use $\frac{||J\delta||}{\lambda_{\max}(JJ^T)}$ to measure the worst-case privacy risk, the computation cost splits into the numerator and the denominator.
- The numerator can be calculated with a Jacobian-vector product, which is equivalent to two gradient computations plus a vector product. So the numerator can be calculated efficiently with frameworks like PyTorch, and its cost is constant for a given model and dataset.
- Thus, **the main cost comes from the denominator, which uses power iteration to calculate the maximum eigenvalue of $JJ^T$**. In Fig.7, we show the convergence of power iteration is about 20 times faster than that of the inversion loss, which means that our metric can evaluate the privacy risk accurately and more efficiently.
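As an aside, the matrix-free computation sketched in A2 can be illustrated with a small numpy example (a toy Jacobian with known singular values, constructed here purely for illustration; the actual implementation would use autodiff Jacobian-vector products rather than an explicit $J$):

```python
import numpy as np

def lambda_max_power_iteration(J, iters=100, seed=0):
    """Estimate the largest eigenvalue of J J^T by power iteration.

    Each step only needs products with J and J^T, so with autodiff these
    become Jacobian-vector products and the matrix J J^T never has to be
    materialized explicitly.
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(J.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = J @ (J.T @ v)              # one application of J J^T
        v = w / np.linalg.norm(w)
    return float(v @ (J @ (J.T @ v)))  # Rayleigh quotient at convergence

# Build a toy "Jacobian" with known singular values, so that
# lambda_max(J J^T) = 3.0**2 = 9.0 by construction.
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((5, 5)))
V, _ = np.linalg.qr(rng.standard_normal((20, 20)))
J = U @ np.diag([3.0, 2.0, 1.5, 1.0, 0.5]) @ V[:5, :]
est = lambda_max_power_iteration(J)
```

Because each iteration touches $J$ only through matrix-vector products, the per-step cost matches the Jacobian-vector-product cost discussed for the numerator.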
**W3:** The validity of assumptions, especially the actual values of u_L and u_J. If they are large, then Eq7 makes no sense.
**A3:** Thanks for the comments.
1. Assumption 3.1 is the assumption of a perfect attack, which is considered the worst-case privacy risk as we claim in lines 133-135. Worst-case risk measurement is currently used in many other areas, such as differential privacy [C]. As gradient inversion attacks are evolving stronger over time, we believe this assumption is reasonable and makes our measurement useful in the future.
2. Assumption 3.2 is satisfied with the common loss function like cross-entropy loss and was used in the literature [A, B].
3. For assumption 3.3, first, we agree that “this assumption is bypassed by the ‘Extension to singular Jacobians’” as shown in the limitations by the reviewer. Moreover, we show in Fig.1 and 2 that the lower bound of $I_{lb}$ is a good estimator of I2F. Thus we can directly use $I_{lb}$ to estimate worst-case privacy risk even when $JJ^T$ is singular.
4. Assumptions 3.4 and 3.5 are not necessary for I2F because I2F only depends on Assumptions 3.1-3.3 (lines 151-152). Assumptions 3.4 and 3.5 are only used to provide a theoretical validation of I2F when the noise $\delta$ is not infinitesimal. We also calculate on LeNet the values of $\mu_J$ and $\mu_L$ in Assumptions 3.4-3.5. **For CIFAR10, $\mu_L=0.5014$ and $\mu_J=1.7\times10^{-13}$. For MNIST, $\mu_L=0.7291$ and $\mu_J=3.7\times10^{-13}$.** These values are small enough to be reasonable in practice.
**W4:** I_{lb} in Fig1 is approx of Eq4, and is different from I_{lb} in Eq5. Computation speedup in “Efficient Evaluation” is not used.
**A4:**
- **$I_{lb}$ in Fig.1 is different from that in Eq.5:**
- We apologize for the confusion. The x-axis in Fig.1, i.e. *$I_{lb}$ (matrix norm)*, is calculated as defined in Eq.5. The matrix norm here is defined as $\|A\| = \sup_{x\ne0} (\|Ax\|/\|x\|)$ in line 122. The y-axis in Fig.1 is calculated as Eq.4, which is $(JJ^T)^{-1}J\delta$. We will clarify to avoid such confusion in the revision.
- **The computation speedup in “Efficient Evaluation” is not used:**
Three computation speedups are proposed in “Efficient Evaluation”.
- Speedup (1), the efficient evaluation of $J\delta$, is used throughout the paper wherever we need to compute a Jacobian-vector product.
- Speedup (2), the efficient matrix inversion, is used in Fig.1, where we need to calculate $I$ (matrix inversion) as $(JJ^T)^{-1}J\delta$ in Eq.4. In Fig.1, the results indicate that the lower bound $I_{lb}$ is a good estimator of $I$, so we use $I_{lb}$ instead in the following experiments.
- Speedup (3), the efficient evaluation of the Jacobian norm, is used throughout the paper wherever we need to compute the norm of the Jacobian.
**W5:** The L2-norm is used in all math equations, but the experiments use MSE/RMSE, which makes it hard to tell how tight the approximated lower bound is.
**A5:** We apologize for the confusion. There is a typo in Eq.1: the quantity should be defined as the **root** of the recovery MSE, i.e., $||x_0-G_r(g_0+\delta)||$. Also, please kindly check the caption of Fig.1, where RMSE is short for **root mean square error** rather than *recovery MSE*. We will revise Eq.1 to make this clearer.
**W6:** In Sec5.3, both 1 and b exist.
**A6:** Thanks for pointing this out. There is a typo and it should be $\sigma(\theta^{T}x)-b$ instead of $\sigma(\theta^{T}x)-1$. We will fix this in the revision.
**Q1:** What is the perturbation scale used in Fig4
**A1:** In Fig.4, we use the eigenvector as the perturbation so the scale is 1.
**Q2:** In Fig5, is the perturbation Gaussian noise or eigenvector?
**A2:** The perturbation is sampled from Gaussian distribution as in the figure caption.
[A] Guo, Chuan, et al. "Bounding training data reconstruction in private (deep) learning." ICML, 2022.
[B] Hannun, Awni, Chuan Guo, and Laurens van der Maaten. "Measuring data leakage in machine-learning models with Fisher information." UAI, 2021.
[C] Dwork, Cynthia. "Differential privacy." Springer Berlin Heidelberg, 2006.
---
Rebuttal Comment 1.1:
Comment: I appreciate the efforts the authors spent on their rebuttal! It solves my concerns and I've raised my score.
---
Reply to Comment 1.1.1:
Title: Thanks for raising your score
Comment: We are glad that all your concerns are addressed. We really appreciate your valuable comments and acknowledging the contribution of our work! We will revise our final paper based on your suggestions. | Summary: This paper proposes to leverage influence function as a tool for understanding and analyzing the privacy risk in gradient leakage by connecting the private gradients with the recovered images. Inversion Influence Function (I^2F) is introduced as an efficient approximation of deep leakage attacks. Theoretical justification is provided for this approximation and empirical analysis is conducted on two image datasets (MNIST and CIFAR10).
Strengths: 1. Understanding privacy leakage through gradients is an important and timely topic.
2. It is an original work and perhaps the first to leverage influence function to perform analysis on deep gradient leakage.
3. The analysis provides an explanation for sample- and class-wise variance in reconstruction quality (referred to as “unfair privacy” in the paper).
Weaknesses: 1. For complex neural networks with non-convex loss functions, the inversion mapping Gr may not be bijective, e.g., there might exist multiple x that induce a similar gradient.
2. The proposed inversion influence function based on first-order Taylor requires the added perturbation to be infinitesimal to be accurate. Although this is partially justified through Theorem 3.1, it can be seen from Fig. 1 that the lower bound is violated at larger noise values.
3. For singular Jacobians, a constant needs to be added for numerical stability, which has to be tuned for each specific dataset.
4. From what the reviewer understands, the performed analysis through influence function only considered an adversary who observes the (noisy) gradients and tries to perform inversion by solving the gradient matching optimization problem. In practice, a sophisticated attacker may leverage prior knowledge to improve the reconstruction. The experiments only involved attacks with no/weak prior (e.g., TV), it would be interesting to see how well the proposed metric approximates the reconstruction quality of a strong biased attacker, e.g., using GAN as prior.
5. In the batch recovery case, the upper bound by decomposing into individual gradient inversion might be too loose.
6. Not a weakness but some discussion and comparisons between the proposed lower bound and prior work on lower bounding reconstruction MSE would be nice to have, e.g., [R1] using estimation theory and [R2] using DP-SGD.
[R1] Guo, Chuan, et al. "Bounding training data reconstruction in private (deep) learning." International Conference on Machine Learning. PMLR, 2022.
[R2] Hayes, Jamie, Saeed Mahloujifar, and Borja Balle. "Bounding Training Data Reconstruction in DP-SGD." arXiv preprint arXiv:2302.07225 (2023).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. It can be seen from Fig. 2 that Cifar10 samples induce a larger variance compared to MNIST. Is this an artifact of Cifar10’s data distribution being more complex?
2. What are the variances of the Gaussian noise used to produce Fig. 1 & 2?
3. How much data is needed to tune the epsilon constant? How much impact does the constant have on the results? E.g., how would I^2F perform if an epsilon constant tuned on a different dataset is used?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The limitations and social impact have been discussed in the last section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the positive comments on our contribution and novelty toward understanding the gradient inversion attack!
**W1:** For complex NNs and non-convex loss functions, the inversion mapping $G_r$ may not be bijective
**A1:** Thanks for your comments. Indeed $G_r$ may not be bijective, but this is not a weakness of our work, since **it does not conflict with our assumption**. We assume a perfect attack given the exact gradient of a sample.
- Note that the sample itself is a solution of Eq.(1). If the solution of the inversion attack is unique, then the perfect-attack assumption is equivalent to saying the attack optimization method for solving Eq.(1) is optimal, so given the exact gradient of a sample, the attack can exactly recover the sample. Even if the solution is non-unique, we can still assume the attack attains the sample in the worst case.
**W2:** I2F needs Taylor expansion, which requires the perturbation to be infinitesimal.
**A2**: Thanks for the question.
1. Firstly, with a small noise, it is more likely for the attack to recover an image closer to the original one [C]. Thus the privacy risk could be higher, so it is more valuable to bound it.
2. Second, since large noise can destroy the performance [C, D], a small noise is often a common and better choice so we focus more on the privacy risk under a smaller perturbation.
3. Third, we also provide a theoretical validation in Sec 3.2 to show the utility of $I^2F$ under non-infinitesimal noise, where we show I2F is still effective as a lower bound even with large noise.
**W3&Q3:** For singular Jacobians, a constant $\epsilon$ should be tuned. How much impact does the constant have on the results?
**A3:** Thanks for the question.
1. First, **we don’t need to fine-tune the constant $\epsilon$ in our experiments (and also in practice) except in Fig.1**. In Fig.1, it shows the extension for singular Jacobians (known as $I$ (matrix inversion)) can be estimated by $I_{lb}$. Thus we can use $I_{lb}$ to evaluate privacy risk without fine-tuning the $\epsilon$.
2. The impact of the constant $\epsilon$ is evaluated in Fig.2 of the attached PDF. There are two main observations.
1. First, there exists a range of $\epsilon$ where I$^2$F can be used to estimate the privacy risk accurately. So the target of fine-tuning $\epsilon$ is to find an optimal range instead of an optimal value, which makes fine-tuning much easier.
2. Second, in the optimal range of $\epsilon$, $I_{lb}$ is an accurate estimator of I$^2$F, thus the fine-tuning of $\epsilon$ can be avoided.
**W4:** The experiments only involve attacks with no/weak prior (TV loss); a stronger prior should be used.
**A4:** Thanks for the suggestion. We use the TV loss as a prior when attacking LeNet because the attack is already strong enough. When attacking RN18, we instead use BN statistics as a stronger prior, as explained in lines 527-533. Fig.2 shows the evaluation of the effectiveness of I2F with the GS attack with BN prior. It indicates that even under a stronger attack with BN prior, I2F can still estimate the worst-case privacy risk.
**W5:** Loose bound of batch recovery
**A5:** Thanks for the question. We follow prior work like [E, F] and investigate the privacy risk of a single sample, so the main focus is on the sample-wise attack. The upper bound in “Batch data” in lines 236-240 is an application of the sample-wise attack analysis when the attacker can get the per-sample gradients in a batch instead of the average gradient of the batch. In such a case, our proposed I2F can be applied to the worst-case batch-wise attack. We agree with the reviewer that a batch-wise extension of I2F would be very useful and is an interesting direction for future work.
**W6:** Discussion with prior work.
**A6:** Thanks for the suggestion.
- [A] proposed semantic guarantees for DP mechanisms against training data reconstruction and two privacy accounting methods are evaluated based on their guarantees. The difference between [A] and our work lies in:
1. First, [A] discusses the model inversion attack, which is a weaker data reconstruction attack without gradient information compared to gradient inversion.
2. Second, [A] considers the bound of reconstruction error under the differential privacy framework, while our metric aims to measure the privacy risk under a general perturbation on the sample or gradient. Our work can evaluate the privacy risk under more defense mechanisms like gradient pruning or data perturbation.
- [B] proposes an upper bound of the probability of the success of a reconstruction attack and shows that DP parameters are not sufficient to estimate the success of the attack. The difference between [B] and our work lies in:
- We provide a different risk measurement against [B]. [B] captures the privacy risk via the success probability of the reconstruction attack, while our work directly evaluates the reconstruction error under the worst case.
We will include this discussion in the revision.
**Q1:** Is the larger variance of CIFAR10 in Fig.2 because its data distribution is more complex?
**A1:** Yes, we agree with you. Since CIFAR10 is more complex than MNIST, the variance of CIFAR10 is larger than that of MNIST.
**Q2:** Noise variance in Fig1 and 2
**A2:** It is $10^{-3}$, $10^{-4}$, $10^{-5}$, $10^{-6}$ from light to dark colors.
[A] Guo, Chuan, et al. "Bounding training data reconstruction in private (deep) learning." ICML, 2022
[B] Hayes Jamie et al. "Bounding Training Data Reconstruction in DP-SGD." arXiv 2023
[C] Zhu, Ligeng et al. "Deep leakage from gradients." NeurIPS 2019
[D] Huang, Yangsibo, et al. "Evaluating gradient inversion attacks and defenses in federated learning." NeurIPS 2021
[E] Gao, Wei, et al. "Privacy-preserving collaborative learning with automatic transformation search." CVPR. 2021
[F] Sun, J., et al. "Provable defense against privacy leakage in federated learning from representation perspective." arXiv 2020
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Dear authors,
Thank you for your detailed response. Most of my concerns from the previous round have been addressed. Regarding W4, could you elaborate further on whether the proposed theoretical framework can be extended to take into account the adversary's prior knowledge?
---
Reply to Comment 1.1.1:
Title: Our theorem can be extended with prior knowledge
Comment: We are glad that most of your concerns are addressed.
- **Response to W4:** Yes, our theorem can be extended to take into account the prior knowledge. Consider the inversion optimization problem with prior knowledge as $\min_x L’_I(x;g) = L_I(x;g) + I_C(x)$ where $I_C(x)$ constrains $x$ in the prior space $C$ and $L\_I(x;g) = ||\nabla\_{\theta}L(x;\theta) - g||$ defined in Eq.1.
Then the optimization problem can be rewritten as $\min\_{x \in C} L_I(x;g)$. Thus, as long as the original image $x_0$ is in the feasible region defined by $I_C(x)$, our assumption 3.1 and theorems are also applicable. Intuitively, a good regularization should satisfy the requirement, otherwise, it will unreasonably reject the correct recovery.
---
Reply to Comment 1.1.2:
Title: Follow-up on rebuttal and a kind reminder
Comment: Many thanks for your valuable comments and response. As a follow-up on our rebuttal, we would like to kindly remind you that the close date of the discussion is approaching. Have you gotten a chance to read our responses above, in which we tried our best to address your concern? And we would be more than happy to provide more information or clarification. | Rebuttal 1:
Rebuttal: Thanks to all the reviewers for their patient reading and valuable comments. We are trying our best to address the concerns of all the reviewers. Here we attach a PDF for more empirical results.
Pdf: /pdf/e755cb980c131598ed782d12db4b5dbc1ab35ad0.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper aims to understand how private information leaks from gradients in model training.
To this end, the authors propose to use influence analysis to analyze how gradient perturbations can affect the quality of samples reconstructed from private gradients.
Specifically, they first prove that under some assumptions, the privacy leaked from a perturbed gradient can be strictly characterized by a novel Inversion Influence Function (I2F), which is a function that takes the gradient perturbation as input.
With I2F, the authors then gain insights about DGL, such as (1) common random noise (e.g., Gaussian noise) cannot provide uniformly fair privacy protection for each gradient, and (2) future protection for DGL can be developed based on the eigenvalues of the Jacobian matrices of the model.
Strengths: This paper makes the first step to address an important and fundamental problem, i.e., decoupling the black-box behavior of data gradient leakage. The authors make non-trivial contributions in various aspects:
1. The authors design a novel influence function named I2F based on non-trivial influence analysis. It can strictly characterize the privacy protection provided by gradient perturbation, which gives the first theoretical result about how to analyze gradient privacy leakage.
2. Beyond theoretical analysis, the authors also conduct empirical studies to justify the feasibility of using I2F to study DGL in practice.
3. Based on I2F, the authors found that the singular values of model Jacobian with respect to single inputs could be a good indicator for representing privacy protection strengths, which then illustrates that the gradient privacy protection provided by random noise (which is commonly used in practice) could be unfair to different training data.
4. The paper first establishes the connection between model Jacobian and gradient privacy protection, which suggests that future defense mechanisms can be designed starting from the singular values of model Jacobian.
Weaknesses: 1. The main weakness is that the experiments could be not comprehensive enough to justify the effectiveness of I2F. Specifically:
- Only two datasets, MNIST and CIFAR-10, are involved in the experiments. Furthermore, one of the datasets MNIST could be too simple for illustrating representative results.
- Only a few models are used in the experiments and most of them could be too small.
2. Suggestion: I think it is not necessary to use $\mathcal I_{\mathrm{lb}}$ as a replacement to approximate I2F since the calculation of I2F seems not really costly. Specifically, the main calculation cost would be computing the matrix inversion of $JJ^T$. Because $JJ^T$ is (assumed to be) positive definite, you can easily use `numpy.linalg.eigh()` to calculate the matrix inverse, which is usually computationally efficient in practice.
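The reviewer's suggestion can be sketched with a minimal numpy example (a small, explicit $JJ^T$ constructed here for illustration only):

```python
import numpy as np

def psd_inverse_via_eigh(A, eps=0.0):
    """Invert a symmetric positive-definite matrix via eigendecomposition.

    np.linalg.eigh returns (eigenvalues, eigenvectors) for a symmetric
    matrix; inverting the eigenvalues and reassembling gives A^{-1}.
    The optional eps damps near-zero eigenvalues if A is near-singular.
    """
    w, V = np.linalg.eigh(A)
    return (V / (w + eps)) @ V.T   # equals V diag(1/(w+eps)) V^T

rng = np.random.default_rng(0)
J = rng.standard_normal((4, 10))   # toy Jacobian
A = J @ J.T                        # positive definite (almost surely)
A_inv = psd_inverse_via_eigh(A)
```

Note that this assumes $JJ^T$ can be formed explicitly, which is exactly the scalability question for large models.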
======== After Rebuttal ========
Score has been raised based on the authors response.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: None.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: This paper contributes non-trivially to understanding how privacy leaks from gradients.
Although the empirical analysis could be a little weak, I think this work will significantly contribute to the ML security community and therefore should be accepted.
Flag For Ethics Review: ['Ethics review needed: Privacy and Security (e.g., consent, surveillance, data storage concern)']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We really appreciate the affirmation from the reviewer of our contributions and innovations. We are glad to address the concerns as follows:
**W1:** *Only MNIST and CIFAR10 are in experiments. MNIST is a little simple. Also, the models are too small*
**A1**: Thanks for the suggestion. In Fig.5 of the attached PDF, we evaluate our metric on the large model (ResNet152) and large dataset (ImageNet).
1. For larger models, the MSE is no longer a good metric for recovery evaluation. Even when state-of-the-art attacks are used and the recovered image is visually similar to the original image in Fig.2(b), the two images are measured as different by MSE due to the visual shift: the dog's head is shifted toward the left side. To capture such shifted similarity, we use LPIPS [A] instead, which measures the semantic distance between two images rather than the pixel-to-pixel distance like MSE.
2. Fig.2(a) shows that I2F is correlated to LPIPS using large models and image scales. This implies that **I2F is a good estimator of recovery similarity even for large models and images**.
**W2:** *No need to approximate I2F with I_{lb}, since we can use numpy.linalg.eigh() to calculate matrix inverse.*
**A2:** Thanks for the suggestion. It is a good point that we can use numpy.linalg.eigh() to calculate the matrix inverse. However, for high dimensions of parameters and input images, the main challenge is computing and storing the full Jacobian matrix $J$ itself, beyond the inversion. Thus, we propose two alternative approaches in “Efficient Evaluation” to efficiently compute $(JJ^T)^{-1} Jv$, where $v$ is a vector, without explicitly forming $JJ^T$.
[A] Zhang, Richard, et al. "The unreasonable effectiveness of deep features as a perceptual metric." CVPR. 2018.
---
Rebuttal Comment 1.1:
Title: Score has been raised
Comment: Thanks to the authors for their response. All of my questions have been resolved. I believe this work will significantly contribute to the ML security community so I have raised my score from 7 to 8 to strongly vote for acceptance.
---
Reply to Comment 1.1.1:
Title: Thanks for raising the score
Comment: We are glad that all your concerns are addressed. Many thanks for reading our responses and letting us know your thoughts. We really appreciate your valuable comments and acknowledging the contribution of our work!
A Bounded Ability Estimation for Computerized Adaptive Testing | Accept (poster) | Summary: This is a very interesting paper that proposes a coreset-based algorithm to evaluate students' ability to answer questions from a question bank. The main innovations are defining the student's true probability and proposing a coreset algorithm with pseudo labels.
Strengths: **originality**\
I didn't have direct experience in CAT; however, the application of the coreset finding in efficient ability estimation for CAT is promising.
**quality**\
The proposed algorithm logically makes sense to me, and the English writing is generally clear enough.
**clarity**\
The organization of the paper is good, which helps me understand the main content quickly.
**significance**\
I am a bit concerned about the ethical side of the work. The coreset finding depends on the test takers, which means different test takers would end up with different questions assigned. Although the approximation error to the true estimate is claimed and given, I think it may raise the fairness concern of whether giving different questions to different test takers is fair.
Weaknesses: I have multiple concerns regarding the mathematic rigor in this paper.
1. In proposition 1, the definition of question bank $Q$ is missing. Hence, how "questions" or samples are generated to increase $|Q|$ is unclear. In addition, I think the proof of Proposition 1 should add $\forall \epsilon>0$.
2. Definition 2 seems correct. However, I think it is nontrivial to prove that optimizing a loss function with gradients equivalent to the original loss function is equivalent to optimizing the original loss function.
3. The upper bound for (4) is skipped in line 148. It would be clearer to list the upper bound of the loss first and then give the solution (coreset) of this upper bound.
4. lemma 1 actually does not say much useful for the approximation. If you want to say the algorithm is to approximately solve the problem, then an approximation error to measure the distance between the original function and the function approximation should be explicitly provided.
5. The expectation writing in line 188 is problematic. What distribution does $Y$ come from?
6. Why the ''probability distance'' $H_p(\theta^t, \theta^*)$ in (10) is not zero when $\theta^t=\theta^*$?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Can you provide the variance of your experimental results?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Please see the weakness section.
Flag For Ethics Review: ['Ethics review needed: Discrimination / Bias / Fairness Concerns']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback on our CAT paper. We appreciate the time and effort you have invested in reviewing our work.
For your concerns regarding the fairness of CAT itself, first of all, let me introduce the background of CAT: as stated in Section 1, CAT is a *personalized* question selection system, which can adaptively select suitable items for individuals, assessing their ability accurately and efficiently. The typical application of CAT is GRE exam: when a test-taker begins the GRE, the computer provides a question of medium difficulty. If the response is correct, the computer presents a slightly more challenging question. Conversely, if the response is incorrect, the following question will be slightly easier. That's why some candidates may perceive the test becoming progressively more difficult.
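The GRE-style loop described above can be sketched as a toy bisection (hypothetical difficulty scale and a deterministic simulated respondent, not the paper's selection algorithm):

```python
def adaptive_test(answer_fn, n_questions=10, lo=0.0, hi=10.0):
    """Toy CAT loop: start at medium difficulty, go harder after a
    correct answer and easier after an incorrect one (bisection).

    answer_fn(difficulty) -> True if the test taker answers correctly.
    """
    for _ in range(n_questions):
        difficulty = (lo + hi) / 2     # medium item in the current band
        if answer_fn(difficulty):
            lo = difficulty            # next item slightly harder
        else:
            hi = difficulty            # next item slightly easier
    return (lo + hi) / 2               # final ability estimate

# A simulated test taker with true ability 7.0, who answers
# correctly iff the item is no harder than their ability.
est = adaptive_test(lambda d: d <= 7.0)
```

After $n$ questions the estimate is within $(hi - lo)/2^{n+1}$ of the simulated ability, which is why adaptive selection converges far faster than a fixed test of the same length.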
Therefore, similar to the recommendation system (different users will be recommended different items), *CAT also has fairness issues of course*. But this is another interesting topic, which is not the focus of this paper. Following your and Ethics Reviewer xQDC's suggestion, in an improved version, we will give a brief overview of fairness issues in CAT itself. The following are the responses to your other questions:
> **Q1**: In proposition 1, the definition of question bank $Q$ is missing. Hence, how "questions" or samples are generated to increase $|Q|$ is unclear. In addition, I think the proof of Proposition 1 should add $\forall \epsilon >0$.
**A1**: The question bank $Q$ is defined in the beginning of Section 2. Proposition 1 tries to explain that as the bank size $|Q|$ increases, the approximation becomes more ideal, and increasing the question bank is obviously achievable in real scenarios. Regarding $\epsilon$, generally in the mathematical theory of CAT, $\epsilon$ is greater than 0 by default. Sorry to confuse you, we will make the derivation more detailed in the follow-up.
> **Q2**: Definition 2 seems correct. However, I think it is nontrivial to prove that optimizing a loss function with gradients equivalent to the original loss function is equivalent to optimizing the original loss function.
**A2**: The transformation in Definition 2 is intuitive and consistent with the coreset method: in an optimization problem, if you use different datasets for standard gradient descent, as long as their gradient sums ($\sum_{j\in S}{\gamma_j \nabla l_j(\theta)}$ and $\sum_{i\in Q}{\nabla l_i(\theta)}$) and initial parameter point are guaranteed to be the same, then the final optimization results (${\theta^T}$ and $\theta^*$) will naturally be the same. If you have any other questions, please feel free to ask.
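The equivalence argued in A2 can be checked on a toy example (a hypothetical 1-D least-squares setting, not from the paper): with matched weighted gradient sums at every $\theta$ and a shared initial point, the two gradient-descent runs produce identical iterates.

```python
import numpy as np

# Toy 1-D least-squares losses l_i(theta) = 0.5 * (theta - x_i)^2.
# The full set Q contains a duplicated point; the coreset S keeps one
# copy with weight gamma = 2, so the weighted gradient sum over S
# equals the gradient sum over Q at *every* theta.
Q = np.array([1.0, 1.0, 4.0])
S = np.array([1.0, 4.0])
gamma = np.array([2.0, 1.0])

def gradient_descent(points, weights, theta0=0.0, lr=0.05, steps=200):
    theta = theta0
    for _ in range(steps):
        grad = float(np.sum(weights * (theta - points)))
        theta -= lr * grad
    return theta

theta_full = gradient_descent(Q, np.ones_like(Q))
theta_core = gradient_descent(S, gamma)
```

Both runs converge to the weighted mean of the points; the harder part addressed by the paper is that in CAT the gradients depend on unknown response labels, hence the pseudo-label construction.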
> **Q3**: The upper bound for (4) is skipped in line 148. It would be clearer to list the upper bound of the loss first and then give the solution (coreset) of this upper bound.
**A3**: Since this part is based on previous Coreset research (cited in the paper), and considering the space limit, this part of the derivation is skipped. In the subsequent revisions of the paper, we plan to provide a more detailed derivation of it in appendix.
> **Q4**: Lemma 1 actually does not say much useful for the approximation...
**A4**: The role of Lemma 1 is to tell readers 1) what the essential optimization goal of our proposed expected-gradient-difference method is, because in CAT scenarios we cannot access the labels of all samples as in the traditional coreset problem; and 2) that it is an important theoretical basis of Theorem 1 (line 188 in the main paper). Like previous CAT optimization works [1][2], the approximation of the problem itself does not admit an explicit approximation error. Fortunately, the ultimate goal of the paper is to obtain a theoretically guaranteed ability estimate (i.e., Theorem 1), for which we provide a detailed analysis of the approximation error in the appendix.
> **Q5**: What distribution does $y$ come from?
**A5**: As stated on line 173, "the normed gradient difference is calculated as an expectation $\mathbb{E}_{y \sim p _{\theta^t }}$ over the possible labelings, since student's response labels $y$ to the candidate questions are unknown in the selection step." The distribution of $y$ is determined by the current estimate $\theta^t$. For clarity, we will highlight this part in the next version.
> **Q6**: Why is the ''probability distance'' $H_p(\theta^{t},\theta^*)$ in (10) not zero when $\theta^{t}=\theta^*$?
**A6**: $H_p(\theta^{t},\theta^*)=E_{(q,y)\sim p_{\theta^t}} [1/p_{\theta^*}(q,y)]$ in the upper bound measures the distance between $p_{\theta^t}$ and $p_{\theta^*}$, but it differs from traditional probability distances (such as KL): since the probability $p_{\theta}\in[0,1]$ implies $1/p_{\theta} \ge 1$, the minimum value of $H_p$ cannot be 0 (its theoretical minimum is 1). More details about $H_p$ can be found in Section C of the Supplementary Material (where it can be assumed that $p_{\theta^t}$ is a delta function).
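As a small numeric illustration (hypothetical two-outcome probabilities, not taken from the paper), the lower bound of 1 can be checked directly:

```python
# H_p(theta_t, theta_star) = E_{x ~ p_t}[1 / p_star(x)] over discrete
# outcomes. Every term 1/p_star(x) >= 1, so the expectation can never
# fall below 1; it equals 1 only when p_t is a delta function on an
# outcome to which p_star also assigns probability 1.

def H_p(p_t, p_star):
    return sum(pt / ps for pt, ps in zip(p_t, p_star) if pt > 0)

delta = [1.0, 0.0]                    # p_t concentrated on one outcome
print(H_p(delta, [1.0, 0.0]))         # 1.0: the theoretical minimum
print(H_p(delta, [0.5, 0.5]))         # 2.0: disagreement inflates H_p
print(H_p([0.5, 0.5], [0.5, 0.5]))    # 2.0: still >= 1, never 0
```

Note that even identical non-degenerate distributions give a value above 1, which is why the delta-function assumption matters for reaching the minimum.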
> **Q7**: Can you provide the variance of your experimental results?
**A7**: Sure. In our experiments, we repeated each run 10 times under different seeds. For deep learning methods (BOBCAT and NCAT), we trained 10 models under different seeds. Due to the space limit in this rebuttal pane, we provide the variance results in the global rebuttal pane (PDF) for all reviewers to check.
Reference:
[1] Ghosh, Aritra, and Andrew Lan. "BOBCAT: Bilevel Optimization-Based Computerized Adaptive Testing." International Joint Conference on Artificial Intelligence. 2021
[2] Zhuang, Yan, et al. "Fully adaptive framework: Neural computerized adaptive testing for online education." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 4. 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply.
**Regarding Q1 and my general opinion**\
I think my point was to ask adding $\forall \epsilon>0$, which is standard in a convergence proof. I think you got the point and this concern is relieved. Regarding the definition of $Q$, I meant there is no statistical definition of that, e.g., the distribution of $q\in Q$ and the true answers of $q$. Without making an assumption regarding the data generation process (DGP) of $q\in Q$, for sure the approximation error in Proposition 1 is inaccessible. I would suggest the author make reasonable assumptions that fit CAT about the DGP of $q$, and that gives the problem formation more structurality and hence deriving more rigorous results. I appreciate the authors' efforts in teaching me the background of CAT, and my ethical concern is dismissed. This is generally an interesting and insightful paper from my perspective and some insufficient mathematical rigor from the point of statistics is secondary to the main contribution, hence I am happy to increase my score to borderline accept.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: Thank you for your response. We appreciate your thoughtful feedback and the time you have taken to reconsider the paper. Your suggestion to introduce reasonable DGP assumptions aligned with CAT is insightful, and we see how this can contribute to a more structured problem formulation and lead to more robust outcomes.
We are glad that the paper's background on CAT was helpful and addressed your ethical concerns. Your positive assessment of the paper as both interesting and insightful is truly motivating. We're committed to making the necessary improvements based on your feedback to enhance the overall quality of the paper. | Summary: This paper proposes a method for computerized adaptive testing by selecting questions that have similar gradients in their expected responses to other questions in the question bank. Results show that this method works well on real-world datasets.
Strengths: The proposed method seems sound. Experimental results suggest that it is effective.
Weaknesses: - The proposed method is largely unsurprising and I do not find it to be significantly different than existing active learning methods, except for a different application
- The performance improvement compared to existing methods is minimal
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Can you clarify the significance of your theoretical analysis? They seem to be quite standard for submodular optimization. Strongly convex loss functions may not be feasible, as the authors mentioned, for more complex student models other than IRT. I also find the assumptions in Theorem 2, i.e., the minimized loss is 0, to be hardly achievable. Is the only case when that happens the estimated ability parameter going to infinity (BCE minimization without regularization, so the MLE does not exist)?
- In my personal opinion, minimizing the gradient difference with other questions to select a most representative question is not that different from doing a training-test split, which does not make the proposed method significantly different from existing work on using meta learning for CAT. Is the only reason about submodularity giving you a proof?
- Can you discuss what happens if the question bank is skewed in question parameters? In that case an approximation, Section 3.1, may not work?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments and questions regarding our manuscript. Please feel free to share any further insights or suggestions you might have.
> **Q1**: The proposed method is largely unsurprising and I do not find it to be significantly different than existing active learning methods.
**A1**: In our opinion, CAT, Active Learning, Coreset, and Data Distillation are all sample selection or generation strategies (using the fewest samples to obtain the maximum benefit), and technically their boundaries are not so clear. For example, the classic FSI in CAT selects a question whose difficulty is close to the ability estimate (a 50% probability of a correct answer), which is consistent with the uncertainty method in active learning. Recently, some researchers have applied active learning to CAT [1]. This is an open-ended question, and we welcome any ideas you may have for discussion.
> **Q2**: The performance improvement is minimal.
**A2**: Compared to deep learning based methods, yes, the improvement of our method BECAT at the beginning of the exam is small (line 265), because methods based on deep learning pre-train their selection algorithms on large-scale response data. Information-based methods require no such training, so it is difficult for them to catch up with deep learning methods at that stage. Considering the training overhead and the bias from the dataset, we still choose the information-based paradigm. In the future, we will try to adapt the proposed explicit algorithm to data-driven frameworks.
> **Q3**: Can you clarify the significance of your theoretical analysis?
**A3**: Sure. The focus of this paper is how to design a reasonable question selection algorithm that accurately estimates ability when the ground-truth ability is unknown. The significance of our theoretical analysis stems from our approach to problem identification, approximation, and transformation, wherein we reframe the original problem as submodular optimization. Furthermore, from a technical standpoint, our work builds upon submodular optimization and proposes a novel similarity measurement ($\widetilde{w}(i,j)$) that can be practically applied to CAT scenarios, together with a theoretical analysis (an upper bound on the estimation error).
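To illustrate the submodular-optimization framing, here is a generic greedy facility-location sketch over a hypothetical pairwise similarity matrix `w`; this is a coreset-style stand-in, not the exact BECAT algorithm or the paper's $\widetilde{w}(i,j)$:

```python
# Generic greedy facility-location selection (hypothetical similarity
# matrix, not the paper's exact method). F(S) = sum_i max_{j in S} w[i][j]
# is monotone submodular, so greedily adding the element with the largest
# marginal gain carries a (1 - 1/e) approximation guarantee.

def facility_location(S, items, w):
    return sum(max(w[i][j] for j in S) for i in items) if S else 0.0

def greedy_select(items, w, T):
    S = []
    for _ in range(T):
        best = max((j for j in items if j not in S),
                   key=lambda j: facility_location(S + [j], items, w))
        S.append(best)
    return S

# Two "clusters" of mutually similar questions: {0, 1} and {2, 3}.
w = [[1.0, 0.9, 0.1, 0.1],
     [0.9, 1.0, 0.1, 0.1],
     [0.1, 0.1, 1.0, 0.9],
     [0.1, 0.1, 0.9, 1.0]]
picked = greedy_select(list(range(4)), w, 2)
print(picked)  # one representative per cluster: [0, 2]
```

With two clusters of mutually similar items, the greedy picks one representative from each, mirroring the goal of selecting questions that jointly cover the bank.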
> **Q4**: Strongly convex loss functions may not be feasible, as the authors mentioned, ...other than IRT.
**A4**: Yes, it is almost impossible to achieve strong convexity for a more complex user model. Fortunately, IRT is the most widely used and most interpretable model in CAT. Recently, researchers have used more complex cognitive diagnostic models (e.g., NeuralCDM) in CAT systems. However, like related theoretical research, we cannot prove the Theorem's validity for a black-box neural network, so we only evaluate such models in experiments, as stated in line 200.
> **Q5**: I also find the assumptions in Theorem 2, i.e., the minimized loss is 0, to be achievable when that happens is the estimated ability parameter going to infinity?
**A5**: For all responses correct or all incorrect, no finite ML/BCE estimates exist [2], just as you said. In general, given the 2PL-IRT model $p_j(\theta) = \mathrm{sigmoid}(a_j(\theta-b_j))$: if a student answers question A (with difficulty $b= 1.0$) correctly but question B (with difficulty $b= 3.0$) incorrectly, the ability estimate $\hat{\theta} = 2.0$ seems a good choice. Moreover, the discrimination parameter $a_j$ also affects such estimation: for a question with high $a_j$, an ability slightly greater than the difficulty can make the probability approach 1 (loss $\approx$ 0). That said, the zero-loss assumption in Theorem 2 is an idealization; its purpose is to illustrate that $H_p$ in the upper bound is small (line 207). Therefore, we further provide experimental results in Appendix E.3, where $H_p$ is found to remain relatively small.
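A quick numeric sketch of this 2PL example (illustrative parameter values only, not from the paper's datasets):

```python
# 2PL-IRT response probability: p_j(theta) = sigmoid(a_j * (theta - b_j)).
import math

def p_correct(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Student answers A (b=1.0) correctly and B (b=3.0) incorrectly; with
# a=1, theta=2.0 sits between the two difficulties and fits both outcomes.
theta_hat = 2.0
print(p_correct(theta_hat, 1.0, 1.0))       # ~0.73: likely correct on A
print(1 - p_correct(theta_hat, 1.0, 3.0))   # ~0.73: likely wrong on B

# High discrimination pushes the probability toward 1 (loss toward 0)
# even when ability is only slightly above difficulty.
print(p_correct(1.5, 10.0, 1.0))            # ~0.99
```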
> **Q6**: In my personal opinion, minimizing the gradient difference...from existing work on using meta learning for CAT. Is the only reason about submodularity giving you a proof?
**A6**: We understand your concern, and we would like to emphasize that our method leverages the concept of minimizing gradient differences to identify the question that maximizes **information gain**. Moreover, compared with meta learning, this greedy question selection algorithm does not need to be pre-trained on large-scale response data. Regarding your question about submodularity, it is indeed an important factor in providing a rigorous proof: the absence of a theoretical guarantee is unacceptable for real-world standardized tests, which is why we propose an explicit selection algorithm (line 99). But our contributions extend beyond this aspect: applying submodularity in the context of adaptive testing is novel and has the potential to advance the field by ensuring the optimality of question selection.
> **Q7**: Can you discuss what happens if the question bank is skewed in question parameters? In that case an approximation, may not work?
**A7**: Item parameters need to be pre-calibrated before testing, which is crucial in any CAT system, regardless of the selection algorithm. For example, when the difficulty parameter of a hard question is wrongly estimated to be small, it will undoubtedly pose challenges for low-ability students. We acknowledge that if the question bank exhibits significant skewness, the approximation might face challenges in accurately estimating ability. To address this concern, we plan to extend our discussions and experiments to include a thorough analysis of skewed question parameter distributions.
Reference
[1] Bi, Haoyang, et al. "Quality meets diversity: A model-agnostic framework for computerized adaptive testing." 2020 IEEE International Conference on Data Mining (ICDM). IEEE, 2020.
[2] Van der Linden, Wim J., and Peter J. Pashley. "Item selection and ability estimation in adaptive testing." Elements of adaptive testing. New York: Springer New York, 2009. 3-30.
---
Rebuttal Comment 1.1:
Comment: Thank you for a detailed response, which clarified things a little bit. My opinion has not really changed; the paper seems technically solid, although I personally did not get many insights out of it.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: We understand that the insights you were looking for might not have been fully met by our work. We value your perspective and will consider your feedback as we continue to improve our research. If you have any further questions or suggestions, we would be eager to hear them.
Once again, thanks for your engagement with our paper. | Summary: This paper tries to answer a question in computerized adaptive testing: how to select a question suitable for student without knowing the ground-truth of his/her true ability. To this end, the authors find the theoretical approximation of the true ability and provide theoretical and experimental analysis to support their proposition. They further develop an expected gradient difference approximation to design a greedy selection algorithm, successfully bounding the estimation error. Extensive experimental results show the estimation efficiency and accuracy of the method compared with baseline systems.
Strengths: S1. An explicit question selection algorithm is proposed, which theoretically resolves the dilemma that previous work can only approximate the true ability implicitly. The solution is interesting: it finds a theoretical approximation of the true ability that is easily overlooked: the ability estimated from full responses to the question bank. Moreover, this paper provides a convincing analysis of the plausibility of the approximation, both experimentally and theoretically.
S2. In terms of technical implementation, this paper improves on Coreset methods and proposes an expected gradient difference approximation for the CAT scenario. The authors further design a simple but effective greedy selection algorithm and prove an upper bound on the error of the ability estimate obtained from the questions it selects.
S3. Experiments on both real-world and synthetic datasets show that it can reach the same estimation accuracy using 15% fewer questions on average, reducing test length.
S4. This paper is easy to follow. The additional material covers the proof and almost all additional experiments, including the detailed experimental analysis of Theorem in the paper.
Weaknesses: 1. The expected gradient difference approximation proposed in this paper is general in my opinion and should not be limited to the CAT scenario. Sample selection strategy design, or efficient learning in parameter estimation scenarios, seems applicable. I suggest the authors study the effectiveness of the method in more tasks/domains.
2. As stated in the paper: "BECAT cannot surpass all other methods at the beginning of the exam". This is a cold-start problem in educational recommendation/testing, but the reason is not explained in detail. Why can the proposed method not surpass the RL/meta-learning methods on this cold-start problem?
3. I have some questions about Theorem 2: What does the H_p function measure? What is the relationship between it and the target in this paper (true ability) and the error upper bound in Theorem 1?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: The limitations are clearly stated in this paper and its supplementary material. It seems longer to train than traditional methods, but it adopts two speed-up tricks to improve the complexity.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your thoughtful insights and suggestions regarding the generality of the expected gradient difference approximation proposed in our paper. Your perspective on the broader applicability of this approach is indeed intriguing and aligns well with our intentions.
> **Q1**: The expected gradient difference approximation proposed in this paper is general in my opinion, and should not be limited to the CAT scenario.... I suggest the authors to study the effectiveness of the method in more tasks/domains.
**A1**: We agree with your viewpoint that the concept of minimizing gradient differences has the potential to extend beyond the realm of Computerized Adaptive Testing. The proposed sample selection strategy design and efficient learning techniques for parameter estimation scenarios hold promise for a wide range of applications. In light of your recommendation, we are excited to investigate the feasibility and effectiveness of our approach in diverse contexts.
> **Q2**: As stated in the paper: "BECAT cannot surpass ... Why the proposed method cannot surpass the RL/Meta learning methods to solve this cold start problem.
**A2**: Thank you for your valuable feedback on our paper. The limitation in BECAT's early performance can be attributed to one key factor: CAT begins with no information about the student's abilities. Unless a fixed strategy is employed, the first question is usually chosen randomly, which inevitably influences information-based algorithms such as FSI, KLI, and our BECAT. In contrast, RL/meta-learning methods can alleviate this challenge to a certain extent because they discern patterns by pre-training on large response datasets. Considering the training overhead in practice and the bias from the dataset, we still choose the information-based paradigm in this paper. By addressing this aspect, we aim to provide a more comprehensive analysis of BECAT's performance in different phases of an exam.
> **Q3**: I have some questions about Theorem 2: What does the $H_p$ function measure? What is the relationship between it and the target in this paper (true ability) and the error upper bound in Theorem 1?
**A3**: As stated in line 202 of the paper, $H_p(\theta^{t},\theta^*)$ can be regarded as a type of statistical distance, measuring how different the probability distribution $p_{\theta^t}$ is from $p_{\theta^*}$. Moreover, with the help of the consistent estimation (i.e., binary cross-entropy) at each step, $H_p(\theta^t,\theta^*)$ can reach its theoretical minimum. Theorem 1 suggests that, to minimize the expected error bound, CAT systems should try to minimize $H_p$; in this case, the estimation error upper bound in Theorem 1 can be as small as possible. In addition, we provide further experimental results in Appendix E.3, where $H_p$ is found to remain relatively small.
---
Rebuttal Comment 1.1:
Title: Thanks for the authors' responses
Comment: Thank you for the authors' detailed responses.
Most of my concerns have been thoroughly addressed. In particular, the explanations regarding Theorems 1 and 2, especially their connection to the overall objectives, are now clear to me.
I would appreciate additional insight on one more point: I'm interested in the practical implementation of your proposed method. Are there specific considerations concerning computational resources or model complexity that users should note?
I would like to hear about this in some detail.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment:
Thank you for your positive feedback. We appreciate your recognition of our efforts to address your concerns and provide clarity on the key aspects of our work.
Thank you for your follow-up questions. As described in Appendix D, these two tricks reduce our method's complexity from $O(|Q||\Theta|)$ to $O(|Q|m)$, $m \ll |\Theta|$, and greatly reduce the calculation frequency during selection (Figure 3 in the Appendix shows the improved algorithm efficiency). Moreover, unlike recent deep learning methods, the proposed selection method requires no additional training and no GPU resources.
If you have any other questions, please feel free to ask. | Summary: This paper investigates the problem of effective question selection in Computerized Adaptive Testing (CAT), with the goal of designing a procedure that minimizes test length while maximizing estimation accuracy of student ability. While student ability doesn’t have a known ground-truth, they use the student ability estimated using all questions (as opposed to a small subset) as the effective ground truth. To design a short test, they use literature on core set selection to pick a subset of questions that have the same gradient updates as if using the entire question bank. This scheme is used in some real world experiments, demonstrating gains in shorter test lengths and better estimation accuracy.
Strengths: I generally like the results in this paper and I believe the CAT community would find this to be a valuable contribution.
- As far as I am aware, the ideas in this paper are novel. It proposes a simple, computationally inexpensive approach to question selection. The work nicely combines existing literature in diverse areas of AI to solve a well-defined problem.
- The results in this paper are compelling and compared against a thorough set of competitive models from recent literature. The paper’s approach is much more computationally simple than that of these competitors, and seems to empirically outperform them.
Weaknesses: My biggest critique of this paper has to do with its writing and presentation, which I would say needs a lot of work. While I find the results interesting, the presented methodology is hard to follow and filled with errors and typos. Just a few examples:
- What does the ⇒ in equation 5 mean? This is not a grammatically well-formed mathematical sentence.
- In equation 5, what is $d$? I don’t think this is defined anywhere
- In the simulation experiment, the paper simulates student abilities “using the smallest EXAM dataset”. What does that mean exactly? Do you also simulate the question parameters or estimate those first? I think this is explained in appendix E, but it is very confusing when you just read the main paper. Relatedly, in task 2, do you mean $\theta^*$ instead of $\theta_0$ (which is unknown)?
Related to the writing, I found the math exposition in this paper to be extremely difficult to follow. Many of the steps are explained poorly, making them hard to verify (e.g. Lemma 1 in Appendix A).
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: For the ability estimated by the responses to the entire question bank to actually converge to the true ability (as shown by your synthetic experiments in Fig 2.), I believe you need the ability parameter to be static over time. If I remember correctly, the EEDI data is collected over a long period of time, so students likely (hopefully) are learning over time. Is there any way to address this issue?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: Right now it sounds like one of the paper’s contributions is the idea of using the entire set of questions to represent the unknown student ability. However, this is something also explored in the BOBCAT paper. I would recommend citing them and carving out your contribution on top of this. For example: “we are the first to propose a selection algorithm that explicitly targets this full sample estimate”.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude for your high appreciation of the contribution and novelty presented in our paper. Your positive feedback means a lot to us. We also appreciate your valuable suggestions regarding the refinement of certain technical aspects in the paper to enhance clarity. Following your advice, we will further emphasize the difference from other literature (e.g., BOBCAT), and on top of it, explicitly highlight our contribution in the paper.
Here are the responses to each of your questions:
> **Q1**: What does the "$\Rightarrow$" in equation 5 mean? This is not a grammatically well-formed mathematical sentence.
**A1**: The "$\Rightarrow$" is used because we cannot directly solve the original problem in Equation 5 and need an approximation to it. For rigor, we use "$\Rightarrow$" to indicate such transformations. We apologize for the confusion and will change this in the next version.
> **Q2**: In equation 5, what is $d$? I don’t think this is defined anywhere.
**A2**: Sorry for missing this definition in the main paper: $d=\max_{i\in Q,j\in S}\max_{\theta\in \Theta} {\Vert \nabla l_i(\theta) - \nabla l_{j}(\theta)\Vert}$ is the maximum pairwise gradient distance.
> **Q3**: In the simulation experiment, ..."using the smallest EXAM dataset". What does that mean exactly? Do you also simulate the question parameters or estimate those first?
**A3**: Since the bank size $|Q|$ can impact Proposition 1 (the larger $Q$ is, the better the approximation), we selected the smallest dataset, EXAM (with the smallest $Q$), to verify this proposition. We use the training set of real data to estimate the item parameters and fix them in the experiments. Due to space limits, we simplified this part in the main paper.
> **Q4**: In task 2, do you mean $\theta^*$ instead of $\theta_0$ (which is unknown).
**A4**: Task 2 is a simulation experiment, which requires manually generating some $\theta_0$s. In order to make these $\theta_0$s conform to the students' ability distribution, we use all the students' responses in the dataset to estimate their abilities $\{\theta_0^1,\theta_0^2, ...,\theta_0^N\}$ as the ground truth $\theta_0$, as in [1][2].
> **Q5**: Many of the steps are explained poorly, making them hard to verify (e.g. Lemma 1 in Appendix A).
**A5**: Some steps are skipped mainly because they (e.g., parts of Lemma 1) are based on other papers, which are cited in the proofs. However, the proofs of the theorems originally proposed in our paper, such as Theorems 1 and 2, are detailed.
Thank you for your suggestion; we promise to provide a detailed proof of Lemma 1 in an improved version of the manuscript. (*A detailed proof of Lemma 1 can now be found in the Global Rebuttal pane, for all reviewers to check.*)
> **Q6**: For the ability estimated by the responses... I believe you need the ability parameter to be static over time. ...so students likely are learning over time. Is there any way to address this issue?
**A6**: In fact, as you said, student abilities are not static in most public datasets. However, these datasets are "session-based", i.e., the response behavior within a session is compact in time (minutes or hours), while the interval between sessions is large (days or even months). Within a session, a student's ability will not change much, so a feasible method is to divide a student's response sequence into different sections according to the time feature/session and treat each section as a completely different student. In addition, we collected a student exam dataset, EXAM, which contains the records of junior high school students on mathematics exams. We will make this dataset publicly available after the paper is accepted.
Reference:
[1] Bi, Haoyang, et al. "Quality meets diversity: A model-agnostic framework for computerized adaptive testing." 2020 IEEE International Conference on Data Mining (ICDM). IEEE, 2020.
[2] Cheng, Ying. "When cognitive diagnosis meets computerized adaptive testing: CD-CAT." Psychometrika 74 (2009): 619-632.
---
Rebuttal Comment 1.1:
Comment: Thanks for your answers, they were quite helpful.
As mentioned in my initial review, I like this paper and think it would be a valuable contribution to the CAT community. I strongly urge the authors to spend a lot more time or get external advice on the writing and presentation.
I will maintain my initial rating of a weak accept because I'm not sure if NeurIPS is necessarily the right place for this paper to reach the relevant audiences and I still have concerns about writing and clarity.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: Thank you for your time and valuable insights. Your acknowledgment means a lot to us, and we are honored to contribute meaningfully to the CAT community through our paper. Thank you for your suggestions about presentation; we will further optimize the paper, making it accessible to general NeurIPS readers in other fields. This is also important for broadening the impact of the CAT field itself in NeurIPS and beyond.
Rebuttal: We appreciate all the reviewers' thorough assessment and valuable feedback. Their thoughtful evaluation has provided valuable insights that have significantly contributed to the improvement of the manuscript. The reviewers' positive comments across different dimensions are truly encouraging:
- **Contribution**:
"CAT community would find this to be a valuable contribution" (KCbW);
"the ideas in this paper are novel" (KCbW);
"This is a very interesting paper/solution is interesting" (TJRB, DatR);
"The author study an important problem" (aSHy);
- **Presentation**:
"This paper is easy to follow" (TJRB);
"The proposed algorithm logically makes sense to me, and the English writing is generally clear enough" (DatR);
"The organization of the paper is good, which helps me understand the main content quickly" (DatR);
- **Method**:
"It proposes a simple, computationally inexpensive approach" (KCbW);
"this paper provides a convincing analysis of the plausibility of the approximation" (TJRB);
"coreset finding in efficient ability estimation for CAT is promising" (DatR)
- **Experimental Results**:
"their method perform superior to other CAT methods at reducing test length" (aSHy);
"The results are compelling and compared against a thorough set of competitive models" (KCbW);
"this method works well on real-world datasets" (GvrK)
Regarding the questions raised by each reviewer, we have carefully considered each point and have made detailed responses in the local rebuttal pane.
---
**[Response to Ethics Reviewers]**
We deeply appreciate your thoughtful consideration of the ethical implications associated with CAT itself. Like a recommendation system (where different users are recommended different items), CAT is a **personalized** problem in education and of course has fairness issues, which is another interesting topic. You point out that the use of adaptive methods within standardized testing is a topic of ongoing debate, with concerns about bias in the original data and its potential amplification during the estimation process. We acknowledge the importance of addressing these concerns to ensure fairness and equity in educational assessments.
While our paper aimed to present a novel method for improving the **efficiency** of CAT, your feedback highlights the necessity of addressing the potential ethical pitfalls. In line with your recommendation, we will revise the paper to incorporate a dedicated subsection that discusses potential sources of bias in CAT itself and the datasets used.
***
Several steps of Lemma 1 are skipped in the paper because they are based on other papers (which are cited in the original proofs). Following the suggestion from Reviewer KCbW, we now provide a detailed proof of Lemma 1 for all reviewers to check:
**[Detailed proof of Lemma 1]**
> *Lemma1*: ...The corresponding designed selection algorithm using submodular function $\widetilde{F}$ is actually approximately solving the following optimization problem: $\min\limits_{|S|=T} \max\limits_{\theta\in \Theta} \mathbb{E}_y [\Vert \sum _{j\in S}{\gamma_j \nabla l_j(\theta)} - \sum _{i\in Q}{\nabla l_i(\theta)} \Vert ]$
*Proof*: We first define a mapping function $h$ from set $Q$ to set $S$: $\forall i \in Q, h(i)\in S$. It assigns every response data point $i \in Q$ to one of the elements $j$ in $S$. Then, for any arbitrary ability parameter $\theta \in \Theta$, we can write
$$ \sum_{i\in Q}{\nabla l_i(\theta)} =\sum_{i\in Q}{[\nabla l_i(\theta)- \nabla l_{h(i)}(\theta) + \nabla l_{h(i)}(\theta)]}
=\sum_{i\in Q}{[\nabla l_i(\theta) - \nabla l_{h(i)}(\theta)]} + \sum_{j\in S}{\gamma_j \nabla l_j(\theta)}$$
Subtracting and taking the expected norm of both sides, we get an upper bound on the error. By the triangle inequality, we have $$\mathbb{E}\left[\Vert \sum_{i\in Q}{\nabla l_i(\theta)} -\sum_{j\in S}{\gamma_j \nabla l_j(\theta)} \Vert \right]\le \sum_{i\in Q}{\mathbb{E}\left[\Vert \nabla l_i(\theta) - \nabla l_{h(i)}(\theta)\Vert \right]}.$$
When the mapping function $h$ maps each element in $Q$ to the element in $S$ that is closest to it in expected gradient, the right-hand side of the inequality is minimized, i.e., each term attains the minimum expected gradient distance: $h(i)= \arg\min_{j\in S} \mathbb{E}\left[\Vert \nabla l_i(\theta) - \nabla l_j(\theta)\Vert \right]$. Therefore, the upper bound of the expected gradient difference can be further constrained:
$$\min_{|S|=T}\mathbb{E}\left[\Vert \sum_{i\in Q}{\nabla l_i(\theta)} -\sum_{j\in S}{\gamma_j \nabla l_j(\theta)} \Vert \right]\le \sum_{i\in Q}{\mathop{\mathrm{min}}\limits_{j\in S}\mathbb{E}\left[\Vert \nabla l_i(\theta) - \nabla l_{j}(\theta)\Vert \right]}.$$
Next, define a similarity function $\widetilde{w}(i,j)$ which measures the expected gradient similarity between response pair $i$ and $j$: $ \widetilde{w}(i,j)=d-\max_{\theta\in \Theta} {\mathbb{E}\left[\Vert \nabla l_i(\theta) - \nabla l_{j}(\theta)\Vert \right]} $, and $d=\max_{i\in Q,j\in S}\max_{\theta\in \Theta} {\Vert \nabla l_i(\theta) - \nabla l_{j}(\theta)\Vert}$ is the maximum pairwise gradient distance. Thus, the optimization problem can also be transformed as:
$$\max\limits_{|S|=T} \sum_{i\in Q}{\mathop{\mathrm{max}}\limits_{j\in S} \widetilde{w}(i,j)}.$$
Following the same reasoning as for the original problem, the corresponding submodular function is $\widetilde{F}(S)=\sum_{i\in Q}{\max_{j\in S} \widetilde{w}(i,j)}$, which is exactly the function used in our proposed method. Thus, the designed selection algorithm is the greedy algorithm for this optimization problem.
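To make the greedy maximization of this facility-location-style objective concrete, here is a minimal Python sketch (our own illustration, not the authors' implementation); `W[i, j]` plays the role of the similarity $\widetilde{w}(i,j)$ between response $i \in Q$ and candidate item $j$:

```python
import numpy as np

def greedy_submodular_select(W, T):
    """Greedily pick T columns (candidate items S) maximizing
    F(S) = sum_i max_{j in S} W[i, j], a facility-location objective."""
    selected = []
    best = np.zeros(W.shape[0])  # best[i] = current max_{j in S} W[i, j]
    for _ in range(T):
        # marginal gain of adding each remaining candidate column j
        gains = np.maximum(W, best[:, None]).sum(axis=0) - best.sum()
        gains[selected] = -np.inf  # never re-pick a selected item
        j = int(np.argmax(gains))
        selected.append(j)
        best = np.maximum(best, W[:, j])
    return selected

# toy example: 5 responses in Q, 4 candidate items, budget T = 2
rng = np.random.default_rng(0)
W = rng.random((5, 4))
print(greedy_submodular_select(W, T=2))
```

Because $\widetilde{F}$ is monotone submodular, this greedy procedure enjoys the standard $(1-1/e)$ approximation guarantee, which is why the lemma describes the algorithm as "approximately" solving the optimization problem.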
***
**[Variance results]**
The PDF below contains the variance results requested by Reviewer DatR, for all reviewers to check.
Pdf: /pdf/0c59b6a28833f4ad38a55b17672c10e3a6b1ec3c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors propose a method to better estimate students' ability using as few questions as possible. They redefine Computerized Adaptive Testing (CAT) as an adaptive subset selection of questions to estimate students' ability and propose a gradient-based selection method to select items that minimize the estimation error term. Through experimental results they show that their proposed method BECAT outperforms existing CAT methods, reducing test length by 10-20%.
Strengths: The authors study an important problem: how to efficiently estimate the ability of a student by adaptively asking them as few questions as possible. They propose a gradient-based method to solve this problem, and their empirical results suggest that their method performs better than other CAT methods at reducing test length.
Weaknesses: 1. The paper is poorly written and is not easy to follow.
2. Important concepts and definitions like student’s “ability”, model of the student is not well defined. For more details refer to the question section.
3. Even though the paper studies an important problem, the overall presentation and readability of the paper can be significantly improved. At this point I cannot propose to accept this paper.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. “For accurate and efficient assessment” : how is ground truth “ability” $\theta_0$ defined mathematically? what is $\Theta$ set mathematically?
2. How can the ability be estimated by minimizing the empirical loss? Can the authors provide a concrete example?
3. In line 82-83, what is “standard” gradient descent and under what ability space $\Theta$ the gradient descent can minimize empirical risk
4. The student’s ability is defined to be fixed $\theta_0$ in line 75, and in line 90 student current ability is said to be $\theta^t$ which is an evolving parameter through time. How are these two definitions consistent?
5. “there is no such ground truth $\theta_0$ in the dataset” what does a parameter being in a dataset even mean?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The cognitive model of the student is oversimplified.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We understand that there might be room for improvement in terms of clarity. Regarding the questions you raised, we have carefully considered each point and have made the following responses:
> **Q1**: How is ground truth "ability" $\theta_0$ and $\Theta$ defined mathematically? Student’s “ability”, model of the student is not well defined.
**A1**: In Item Response Theory (IRT), the student's ability parameter $\theta$ refers to the latent trait being measured. It represents the underlying ability of the student that is not directly observable but influences their performance on the test items [1]. Therefore, $\theta_0$ is a student's true ability parameter, which can be estimated by student's performance on the test items. As stated in line 36, you can simply think of it as a parameter estimation problem: $\theta_0$ is the unknown truth value to be estimated, $\Theta$ is the entire parameter space, and the student's performance is the observation $X$.
> **Q2**: How can the ability be estimated by minimizing the empirical loss? Can the authors provide a concrete example?
**A2**: In Item Response Theory (IRT), the ability of a student can be estimated by minimizing the empirical loss when using a probabilistic model such as the logistic model for binary responses. Minimizing the empirical loss involves finding the ability parameter $\theta$ that best fits the observed response data [2]. To illustrate with a concrete example (line 300), let's consider a simplified version of the 1-parameter logistic (1PL) IRT model for binary responses. In this model, the probability of a correct response to item $i$ for a student with ability $\theta$ is given by:
$$P(\text{Correct response to item } i) = \frac{1}{1 + e^{-(\theta - b_i)}},$$
where $b_i$ is the difficulty parameter for item $i$ (the ability level at which the item has a 50% chance of being answered correctly). Now, assume we have a dataset with binary responses (0 for incorrect, 1 for correct) for a set of items administered to multiple students. The empirical loss function, such as Binary Cross Entropy (BCE) loss, can be used to estimate the ability parameter $\theta$. We can use optimization techniques like gradient descent or other numerical methods to find the minimum of the empirical loss function. The estimated $\theta$ will be the ability parameter that best fits the observed data based on the IRT model.
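To make this concrete, here is a hypothetical sketch (our own toy code, not from the paper) that estimates $\theta$ under the 1PL model by plain gradient descent on the BCE loss; the item difficulties and responses are made up for illustration:

```python
import numpy as np

def estimate_theta_1pl(responses, b, lr=0.5, n_steps=300):
    """Estimate ability theta under the 1PL (Rasch) model by gradient
    descent on the binary cross-entropy of the observed responses."""
    theta = 0.0
    for _ in range(n_steps):
        p = 1.0 / (1.0 + np.exp(-(theta - b)))  # P(correct | theta)
        grad = np.sum(p - responses)            # d(BCE)/d(theta)
        theta -= lr * grad / len(b)
    return theta

# a student who answers the easy items correctly and the hard ones wrong
b = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])  # item difficulties
y = np.array([1.0, 1.0, 1.0, 0.0, 0.0])    # observed 0/1 responses
print(estimate_theta_1pl(y, b))            # MLE of theta, roughly 0.6
```

The gradient $\sum_i (p_i - y_i)$ follows directly from differentiating the BCE loss of the logistic model with respect to $\theta$.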
> **Q3**: In line 82-83, what is “standard” gradient descent?
**A3**: Standard gradient descent, also known as batch or deterministic gradient descent, is an optimization algorithm that uses the **entire** training set, processing all the training examples simultaneously in one large batch [3]. This terminology can be somewhat confusing because the word "batch" is also often used to describe the minibatch used by minibatch stochastic gradient descent. Typically the term "standard/batch gradient descent" implies the use of the full training set, while the use of the term "batch" to describe a group of examples does not. Therefore, to prevent unnecessary misunderstanding caused by "batch", we use "standard" instead in this paper.
> **Q4**: Under what ability space $\Theta$ the gradient descent can minimize empirical risk?
**A4**: From the perspective of optimization, it is not fundamentally different from general gradient descent problems, as it requires finding an optimal value $\theta^*$ in the whole parameter space $\Theta$.
> **Q5**: The student’s ability is defined to be fixed $\theta_0$ in line 75, and in line 90 student current ability is said to be $\theta^t$ which is an evolving parameter through time. How are these two definitions consistent?
**A5**: Since students do not receive correctness feedback during the test, the true value of the student's ability, $\theta_0$, is **constant** and **unchanged** throughout the test. As mentioned in Section 1 of the paper, current ability $\theta^t$ is an "estimate", and CAT needs to sequentially use student responses to keep this estimate $\theta^t$ close to $\theta_0$. Therefore, CAT is essentially an "online" parameter estimation problem, and our contribution is to design a reasonable question selection strategy for efficient estimation, when the groundtruth $\theta_0$ is unknown.
> **Q6**: "there is no such ground truth $\theta_0$ in the dataset" what does a parameter being in a dataset even mean?
**A6**: As stated in line 33 of the paper, CAT hopes that the designed question selection algorithm can select the fewest suitable items to make the ability estimation more accurate. In essence, it is trying to solve the optimization problem in Definition 1: $\mathop{\mathrm{min}}\limits_{|S|=T} \Vert {\theta^T}-\theta_0 \Vert$. However, this optimization problem **cannot** be solved explicitly because the ground-truth $\theta_0$ is unknown, i.e., it is not available in the dataset. This is why we use $\theta^*$ to approximate $\theta_0$ and reformulate the problem (i.e., Definition 2).
> **Q7**: The cognitive model of student is over simplied.
**A7**: In fact, the student model in traditional CAT and psychometrics is indeed this "simple". The most widely used student model in CAT is IRT, which usually takes the form of logistic regression. Meanwhile, the cross-entropy loss of L2-regularized IRT is strongly convex, which is the assumption of Theorem 1. In recent years, more "complex" models have emerged, such as NeuralCDM, which uses neural networks to model students. As in optimization theory research, we cannot prove the theorem's guarantees for a black-box neural network, so we evaluate it empirically in the experiment section.
Reference:
[1] Embretson, Susan E., and Steven P. Reise. Item response theory. Psychology Press, 2013.
[2] Baker, Frank B., and Seock-Ho Kim, eds. Item response theory: Parameter estimation techniques. CRC press, 2004.
[3] Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response! I still think the clarity of the paper can be further improved by using proper notation and making the testing setup clearer. For example, it would help to clarify the mathematical model from which the samples $(q_t, y_t)$ are assumed to be generated. Regarding notation, in line 110, I am not sure why we need the limit; why not say $\theta^* = \theta^{|Q|}$ when $|Q|$ is a fixed positive integer? Further, if the question bank size is fixed, the learner would not gain anything by seeing repeated questions. In that case, the limit in $\theta^* \approx \lim_{t\rightarrow \infty} \theta^t \approx \theta_0$ does not make much sense. Further, I would appreciate a discussion section on the 'true' ability of the student mathematically and whether there would be any approximation error when one tries to approximate the 'true' ability by the parametrized family $\Theta$. For now, I would keep my score.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: Thank you for your follow-up questions! We are grateful for your commitment to helping us improve the paper. The following are responses to your questions.
> **Q1**: The mathematical model from which the sample $(q_t, y_t)$ have been assumed to be generated from.
**A1**: Since the ground truth of student ability $\theta_0$ is not available, a feasible and commonly used evaluation method in CAT field is simulation experiment (Task 2) [1]. Specifically, we artificially generate their $\theta_0$ and further simulate student-question interaction process within CAT systems (line 223).
Let me give a concrete example: suppose the student model is 2PL-IRT: $p_j(\theta)=\text{sigmoid}(\alpha_j(\theta - b_j))$, where each question $j$ is characterized by two parameters: the discrimination parameter ($\alpha_j$) and the difficulty parameter ($b_j$). To generate the true ability values ($\theta_0$) for our simulated $N$ students, we can sample $\\{\theta_0^1, \theta_0^2, ..., \theta_0^N\\}$ from a known distribution, such as a normal distribution. These ability values represent the latent trait being measured by the test questions. Therefore, given one generated student $i$ whose ability is $\theta_0^i$, for any question $q_k$ selected by CAT, the probability $p_k(\theta_0^i)$ of a correct response can be calculated by the above IRT model, and the corresponding response label $y$ can be generated according to the threshold (e.g., 0.5).
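The simulation procedure just described can be sketched in a few lines of Python (our own toy code with made-up distributions, not the paper's implementation); labels are produced by thresholding the 2PL probability at 0.5, as stated above:

```python
import numpy as np

def simulate_responses(theta_0, alpha, b):
    """Simulate binary responses of a student with true ability theta_0
    under the 2PL model p_j = sigmoid(alpha_j * (theta_0 - b_j)),
    thresholding each probability at 0.5 as described in the reply."""
    p = 1.0 / (1.0 + np.exp(-alpha * (theta_0 - b)))
    return (p >= 0.5).astype(int)

rng = np.random.default_rng(42)
n_students, n_items = 3, 6
theta_0 = rng.normal(size=n_students)        # true abilities ~ N(0, 1)
alpha = rng.uniform(0.5, 2.0, size=n_items)  # discrimination parameters
b = rng.normal(size=n_items)                 # difficulty parameters
for th in theta_0:
    print(round(float(th), 2), simulate_responses(th, alpha, b))
```

A CAT simulator would then feed these synthetic responses to the selection algorithm and compare the resulting estimate against the known $\theta_0$.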
> **Q2**: Why do we need the limit; why not say $\theta^*=\theta^{|Q|}$ when $|Q|$ is a fixed positive integer?
**A2**: Yes, $\theta^*=\theta^{|Q|}$ as you said, because $\theta^*$ is the estimate that is "estimated by his/her full responses to the entire bank $Q$" (Proposition 1). The reason we use the equivalent limit form ($\theta^*=\lim\limits_{t\to|Q|}{\theta}^t$) in the proof is to prove $\theta^* \approx \lim\limits_{t\to \infty}{\theta}^t \approx \theta_0$ ($t\in[0,|Q|]$), which also has a limit form. We wanted the reader to understand our proof more intuitively. Thanks for your feedback; we will improve this notation in the future to avoid confusion.
> **Q3**: if question bank size is fixed, the learner would not gain anything by seeing repeated questions. In that case, the limit in $\theta^* \approx \lim\limits_{t\to \infty}{\theta}^t \approx \theta_0$ does not make much sense.
**A3**: In general machine learning problems, the consistency property [2] of maximum likelihood estimation (i.e., $\lim\limits_{t\to\infty}p\left(\left|{\theta}^t-\theta_0\right|\geq\epsilon\right)=0$) means: *when the number of observed samples $t$ is huge (approaching infinity), the estimated value $\theta^t$ is almost the same as the true value $\theta_0$*. An implicit premise is that the samples must be (mostly) **diverse**. Otherwise, if the 60,000 handwriting images in the MNIST dataset were all identical, the training result would undoubtedly be poor, as you said. Therefore, in the corresponding CAT scenario, the meaning of $t\to \infty$ is essentially that the student needs to respond to as many (and as diverse) questions as possible, which requires the question bank $Q$ to approach infinity. To verify this, we use simulation experiments (line 114): Figure 2(a) shows that when the bank size exceeds 300, the estimate $\theta^* \approx \theta_0$. (By the way, questions that have already been selected need to be removed from the candidates.)
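This consistency argument can be illustrated numerically with a small self-contained simulation (our own toy 1PL setup, not the paper's Figure 2(a) experiment): as the number of diverse items grows, the maximum-likelihood estimate of $\theta$ typically gets closer to the simulated truth $\theta_0$.

```python
import numpy as np

def mle_theta(y, b, lr=0.5, n_steps=300):
    """1PL maximum-likelihood ability estimate via gradient descent."""
    theta = 0.0
    for _ in range(n_steps):
        p = 1.0 / (1.0 + np.exp(-(theta - b)))
        theta -= lr * np.sum(p - y) / len(b)
    return theta

rng = np.random.default_rng(0)
theta_0 = 0.8                                    # simulated true ability
for n_items in (10, 100, 1000):
    b = rng.normal(size=n_items)                 # diverse item difficulties
    p = 1.0 / (1.0 + np.exp(-(theta_0 - b)))
    y = (rng.random(n_items) < p).astype(float)  # Bernoulli responses
    print(n_items, abs(mle_theta(y, b) - theta_0))
```

The absolute error generally shrinks roughly like $1/\sqrt{t}$, in line with the consistency property quoted above.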
We truly appreciate your suggestion, and we will include the details of our discussion above (e.g., mathematical definition of student true ability) in a future version. Please feel free to share any further insights or suggestions you might have.
Reference:
[1] Vie J J, Popineau F, Bruillard É, et al. A review of recent advances in adaptive assessment[J]. Learning analytics: Fundaments, applications, and trends: A view of the current state of the art to enhance e-learning, 2017: 113-142.
[2] Eliason S R. Maximum likelihood estimation: Logic and practice[M]. Sage, 1993.
---
Rebuttal 2:
Comment: I can corroborate the authors' point that the cognitive model used in this paper is standard in the testing literature and is used universally in practice, e.g., in the GRE, Deep Knowledge Tracing, and learning apps.
Uncertainty Estimation for Safety-critical Scene Segmentation via Fine-grained Reward Maximization | Accept (poster) | Summary: As existing approaches for uncertainty estimation have been limited by the guidance for calibrating the prediction risk and model confidence, the paper proposes a novel fine-grained reward maximization (FGRM) framework, which addresses uncertainty estimation by reinforcement learning based model tuning with an uncertainty metric related reward function. It adopts the fisher information matrix for capturing parameter importance, acting as weights for fine-grained updates. Besides, evidential pre-training is incorporated to distinguish between aleatoric and epistemic uncertainty. The experimental results on two surgical datasets show FGRM improves uncertainty estimation for both ID and OOD data while not harming original segmentation performance.
Strengths: 1. The paper is well motivated for the uncertainty estimation problem, with reasonable application of fisher information matrix and evidential learning. Though built upon previous works that uses RL for guided training, I think the fine-grained update mechanism is novel and the contributions are clear.
2. The presentation is clear and easy to follow.
3. The empirical performance is great comparing to baselines.
Weaknesses: 1. While enjoying superior empirical performance, it would be better to provide some theoretical insights for FGRM like the uncertainty bounds, which is significant in the uncertainty estimation context.
2. The experiment section has some ambiguous points (see limitations below).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In line 306, why do you say `without harming the accuracy` instead of `while promoting the accuracy`, given that the Dice performance increases during reward tuning?
2. As you use evidential learning to distinguish between aleatoric and epistemic uncertainty, what is the principle behind how it benefits your uncertainty estimation? You haven't explained this clearly in the ablation study.
3. What does `efficient sampling` mean in the lower part of Figure 1? How does your sampling algorithm differ from simple mini-batch sampling?
4. Typo in line 324: safter => safer.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: 1. The paper lacks theoretical provements for FGRM like the uncertainty bounds, which is significant in the uncertainty estimation context.
2. In Figure 3 and Figure 4(b), the paper doesn't record FGRM's performance on the OOD counterpart, which is an important contribution claimed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback regarding the novelty of our proposed fine-grained parameter update scheme and the great empirical performance of our method. Our responses to your comments are as follows.
> * While enjoying superior empirical performance, it would be better to provide some theoretical insights for FGRM like the uncertainty bounds, which is significant in the uncertainty estimation context.
Reply: Thank you for the comment. We would like to provide theoretical insights regarding our proposed fine-grained parameter update scheme. For uncertainty metric reward maximization, directly adopting the policy gradient method can be sub-optimal, which can be shown via the closed-form solution of the policy network. In the RL tuning process, the overall optimization objective can be expressed as $ J(\phi) = E_{({x}, {y})}[R({\mu}, {\hat{y}}, {y}) - \beta \text{log}(\frac{\pi_{\phi}({x})}{\pi_{\hat{\theta}}({x})})] $. According to the policy gradient theorem, the gradient with respect to the first term is $R(\mu, \hat{y}, y) \nabla_{\phi} \text{log} \pi_{\phi}(\mu, \hat{y}|x)$. Updating the parameters $\phi$ based on this policy gradient drives the policy towards $\pi_{\phi}^* \in \{ \pi_{\phi} | \nabla_{\pi_{\phi}}\mathcal{J(\phi)} = 0 \}$, which can be expressed as $\pi_{\phi}^* = \frac{\pi_{\theta}\text{exp}(R(\boldsymbol{\mu}^i, \boldsymbol{\hat{y}}^i, \boldsymbol{y}^i)/\beta)}{\int \pi_{\theta}\text{exp}(R(\boldsymbol{\mu}^i, \boldsymbol{\hat{y}}^i, \boldsymbol{y}^i)/\beta) d(\boldsymbol{x}^i, \boldsymbol{y}^i)}$. The solution $\pi_{\phi}^*$ can be viewed as weighting the MLE model by the normalized exponential reward. The limited space for exploration constrains the optimization, resulting in sub-optimal solutions. Our proposed fine-grained update mechanism tackles this problem by assigning rewards separately to each network parameter according to its influence on the uncertainty reward.
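To make the idea of per-parameter reward assignment more tangible, here is a minimal sketch (our own toy code with a logistic "policy"; the naming and simplifications are ours, not the exact FGRM implementation) of weighting each parameter's reward-driven gradient by a normalized diagonal Fisher-information estimate:

```python
import numpy as np

def fisher_weighted_update(w, x, y, reward, lr=0.1, eps=1e-8):
    """One fine-grained step for a logistic policy p = sigmoid(w @ x).
    The score (log-likelihood gradient) of each weight is scaled by a
    normalized diagonal Fisher estimate, so parameters with more
    influence on the outcome receive a larger share of the reward."""
    p = 1.0 / (1.0 + np.exp(-w @ x))
    score = (y - p) * x                      # d log-likelihood / d w
    fisher = score ** 2                      # diagonal Fisher estimate
    weight = fisher / (fisher.sum() + eps)   # per-parameter importance
    return w + lr * reward * weight * score  # ascend the weighted reward

w = np.zeros(3)
x = np.array([1.0, 2.0, 0.5])
w_new = fisher_weighted_update(w, x, y=1.0, reward=0.8)
```

In this sketch the normalized Fisher term plays the role of the parameter-importance weights described above: high-influence parameters move further per unit of reward than low-influence ones.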
> * In line 306, why you say without harming the accuracy instead of while promoting the accuracy with Dice performance increases during reward tuning?
Reply: Thank you for the comment. The choice of phrasing is intended to highlight that our method aims to improve uncertainty estimation without negatively impacting the accuracy of the segmentation predictions. This emphasizes that our primary focus is on enhancing uncertainty estimation. However, it is also valid to use “while promoting accuracy” as evidenced by the increasing Dice performance during reward tuning.
> * As you use evidential learning to distinguish between aleatoric and epistemic uncertainty, what is the principle behind how it benefits your uncertainty estimation? You haven't explained this clearly in the ablation study.
Reply: The principle behind how evidential learning benefits our uncertainty estimation lies in its capability to provide a better understanding of uncertainty types. Specifically, aleatoric uncertainty emerges from the inherent, irreducible variability present in the data itself, such as when dealing with unclear tissue boundaries in surgical images. In contrast, epistemic uncertainty arises from limited knowledge about unseen data. By differentiating these two types of uncertainty, our model gains the capacity to provide more informative uncertainty estimates. In the ablation study, we further added a model named "Vanilla EDL" that exclusively uses evidential learning without the fine-grained RL algorithm. This ablation experiment investigates the contribution of evidential learning by directly comparing it with the "Vanilla MLE" model. The results, presented in the tables below, clearly indicate that evidential learning enhances uncertainty estimation by enabling the model to capture different types of uncertainty more accurately. This helps the model better understand the limits of its predictions and produce more meaningful uncertainty estimates.
**LC segmentation**
| | | ID calibration | | OOD inference | |
|-----------|-------|----------------|--------|---------------|------|
| Method | ECE ↓ | MI ↑ | Dice ↑ | PR ↑ | BR ↑|
| Vanilla MLE | 19.46 | 2.88 | 71.22 | 0.88 | 0.08 |
| Vanilla EDL | 12.98 | 4.20 | 73.66 | 1.45 | 0.21 |
| FGRM | 9.63 | 5.87 | 74.88 | 1.85 | 0.47 |
**ESD segmentation**
| | | ID calibration | | OOD inference | |
|-----------|-------|----------------|--------|---------------|------|
| Method | ECE ↓ | MI ↑ | Dice ↑ | PR ↑ | BR ↑ |
| Vanilla MLE | 17.76 | 2.08 | 84.12 | 0.82 | 0.10 |
| Vanilla EDL | 12.15 | 3.68 | 86.20 | 1.40 | 0.18 |
| FGRM | 10.42 | 4.72 | 87.23 | 1.78 | 0.54 |
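As background on the evidential mechanism discussed in this reply, the standard subjective-logic view of evidential learning can be sketched in a few lines (our illustration of the usual Dirichlet parameterization, not the paper's code): the predicted per-class evidence defines a Dirichlet distribution whose expected probabilities capture the prediction, while its vacuity captures epistemic uncertainty.

```python
import numpy as np

def edl_uncertainty(evidence):
    """From non-negative per-class evidence (one pixel), compute the
    Dirichlet parameters, the expected class probabilities, and the
    vacuity u = K / S, which is high when total evidence is low."""
    alpha = evidence + 1.0   # Dirichlet concentration parameters
    S = alpha.sum()
    p = alpha / S            # expected class probabilities
    u = len(alpha) / S       # epistemic (vacuity) uncertainty
    return p, u

# little evidence: high epistemic uncertainty even though p is uniform
p_low, u_low = edl_uncertainty(np.array([0.1, 0.1]))
# strong, one-sided evidence: confident prediction, low vacuity
p_high, u_high = edl_uncertainty(np.array([50.0, 1.0]))
```

A pixel on an ambiguous tissue boundary would yield balanced evidence (high aleatoric spread), whereas an out-of-distribution pixel would yield little total evidence (high vacuity), which is what lets the model separate the two uncertainty types.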
> * What does efficient sampling mean in the lower part of Figure 1? How does your sample algorithm differs from simple mini-batch sampling?
Reply: The term “efficient sampling” in Fig. 1 refers to the inference process of uncertainty estimation. Our method only requires a single forward pass to obtain the uncertainty prediction, which is highly efficient compared to other ensemble or probabilistic-based methods that require multiple forward passes for the uncertainty estimation.
> * In Figure 3 and Figure 4(b), the paper doesn't record FGRM's performance on the OOD counterpart, which is an important contribution claimed in the paper.
Reply: For Fig. 3 and Fig. 4(b), we further added ablation studies on the OOD scenario using the endoscopic submucosal dissection dataset, including contribution of each key component, effect of the fine-grained parameter update, and the progression of uncertainty estimation and segmentation prediction. The results are presented in Fig. 3(b)-(d) of the uploaded PDF file. These results on the OOD scenario reaffirm the findings observed in the ID scenario.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for the effort during the rebuttal, I will keep the positive assessment of the paper. | Summary: This paper introduces a novel Fine-Grained Reward Maximization (FGRM) framework to improve uncertainty estimation in deep segmentation models for safety-critical applications. The approach uses a reinforcement learning-based model tuning paradigm to optimize and calibrate the model. The FGRM framework is the first to leverage reinforcement learning for uncertainty estimation in safety-critical vision tasks, demonstrating improved performance on two surgical scene segmentation datasets.
Strengths: 1. The paper is well-written, logically organized, and effectively explains the novel aspects of the proposed framework.
2. The method has been rigorously tested on two large safety-critical surgical scene segmentation datasets, demonstrating superior performance.
Weaknesses: 1. The evaluation metrics Uncertainty error mutual information (MI), Pixel Ratio (PR), and Box Ratio (BR) can not be found in [18].
2. In the related works and experiments, lack of discussion and comparison with the auxiliary network-based method, e.g. [18] and Corbiere, Charles, et al. "Confidence estimation via auxiliary models."
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See the weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: While the paper is largely comprehensive, there are some areas where it falls short:
1. Comparison with Other Methods: The paper claims superiority over state-of-the-art methods, however, it lacks a direct comparison or in-depth discussion with recent popular general uncertainty estimation methods, such as "Confidence estimation via auxiliary models" or methods specifically tailored for medical image segmentation like [18]. Including these comparisons would validate their claims more convincingly.
2. Metric Citation Issue: There seems to be a mistake in the metric citation, which raises concerns about the overall quality of the experimental setup.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's valuable comments, providing us with opportunities for improvement and clarification. We would like to address each of your comments in detail as follows.
> * Comparison with other methods: In the related works and experiments, lack of discussion and comparison with the auxiliary network-based method, e.g. [18] and Corbiere, Charles, et al. "Confidence estimation via auxiliary models."
Reply: Following your suggestion, we have added the raised auxiliary network-based method ConfidNet (i.e., Corbiere et al. [R1]) into our experiments for comparison. The tables below show the performance of our FGRM method and ConfidNet on the two surgical video datasets as well as the urban scene dataset Cityscapes (an additional general dataset). The results show that our method achieves better uncertainty estimation performance than ConfidNet. We will include these additional experiments in the final version.
**LC segmentation dataset**
| | | ID calibration | | OOD inference | |
|-----------|-------|----------------|--------|---------------|------|
| Method | ECE ↓ | MI ↑ | Dice ↑ | PR ↑ | BR ↑ |
| ConfidNet | 16.76 | 4.58 | 71.32 | 1.41 | 0.32 |
| FGRM (ours) | 9.63 | 5.87 | 74.88 | 1.85 | 0.47 |
**ESD segmentation dataset**
| | | ID calibration | | OOD inference | |
|-----------|-------|----------------|--------|---------------|------|
| Method | ECE ↓ | MI ↑ | Dice ↑ | PR ↑ | BR ↑ |
| ConfidNet | 15.52 | 3.53 | 84.41 | 1.19 | 0.26 |
| FGRM (ours) | 10.42 | 4.72 | 87.23 | 1.78 | 0.54 |
**Cityscapes dataset**
| | | ID calibration | | OOD inference | |
|-----------|-------|----------------|--------|---------------|------|
| Method | ECE ↓ | MI ↑ | Dice ↑ | PR ↑ | BR ↑ |
| ConfidNet | 10.14 | 4.69 | 78.15 | 1.18 | 0.19 |
| FGRM (ours) | 8.52 | 6.97 | 79.23 | 1.77 | 0.48 |
> * The evaluation metrics Uncertainty error mutual information (MI), Pixel Ratio (PR), and Box Ratio (BR) can not be found in [18].
Reply: Thank you for the comment. In paper [18], a comprehensive benchmark of various uncertainty estimation methods is conducted. The metric of uncertainty-error overlap in [18] is a discrete version of the uncertainty error mutual information used in our experiments. We agree on the importance of adding explicit references to the papers that thoroughly describe these metrics. Hence, we will add reference [R2] for the metric of uncertainty error mutual information and reference [R3] for the metrics of pixel ratio and box ratio.
Thank you very much for the careful review; it definitely makes our paper more rigorous. We would also like to assure the reviewer of the quality and soundness of our experimental setup, and we will release all our code, data, and models upon publication of the paper.
References:
[R1] Corbiere, C., Thome, N., Saporta, A., Vu, T.H., Cord, M. and Perez, P. Confidence estimation via auxiliary models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(10), 2021, pp.6043-6055.
[R2] Judge, T., Bernard, O., Porumb, M., Chartsias, A., Beqiri, A. and Jodoin, P.M. Crisp-reliable uncertainty estimation for medical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention. 2022, pp. 492-502.
[R3] Zepf, K., Wanna, S., Miani, M., Moore, J., Frellsen, J., Hauberg, S., Feragen, A. and Warburg, F. Laplacian Segmentation Networks: Improved Epistemic Uncertainty from Spatial Aleatoric Uncertainty. arXiv preprint arXiv:2303.13123. 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. After considering your reply, I still have reservations regarding the comprehensiveness of your experimental evaluations:
Detailed Discussion and Comparisons: I noticed a lack of a comprehensive and detailed discussion when comparing your method to other recent state-of-the-art (SOTA) methods for general uncertainty estimation, such as the auxiliary network-based method. A deeper dive into the pros, cons, and unique features of each method would enhance the quality of your comparison.
Medical Imaging Comparisons: I'd like to emphasize the importance of comparing your approach with methods tailored specifically for medical image segmentation, such as [17] and [18]. These works are benchmarks in the field, and their omission from the comparison is noticeable.
Evaluation Metrics References: While you've referenced the evaluation metrics from [17], [18], and [R3], I observed that there's no direct comparison made with these methods. Given that [17] and [18] are recognized as the latest SOTA for medical image segmentation, such comparisons would be highly relevant and beneficial.
Due to these concerns, I'm inclined to think that the experimental design might have certain biases and lacks comprehensiveness. Consequently, I feel it's appropriate to adjust my rating for this submission.
---
Rebuttal Comment 1.2:
Title: Thank You and Our Further Response
Comment: Thank you very much for taking the time to read our response. We aim to address your remaining concerns as follows.
*Detailed Discussion and Comparisons*: Our paper compares against a range of SOTA approaches, including a probabilistic method [11], model ensemble-based methods [22][23], deep deterministic methods [10][35], and an evidence-based method [6]. We have also included a comparison with the auxiliary network-based method ConfidNet in our response. We acknowledge the importance of a detailed discussion for in-depth comparison, but space limits prevented us from including such an analysis in our original submission. Following your suggestion, we hereby provide a detailed analysis of the pros and cons of each type of method, which we will include in the final version to enhance the quality of our comparisons.
The probabilistic method MC Dropout [11] is simple to implement using dropout layers, but it depends on multiple forward passes for uncertainty estimation. The model ensemble-based method Deep Ensembles [23] combines the outputs of multiple models for reliable uncertainty estimation, but training multiple models incurs a heavy computational burden. Layer Ensemble [22] attaches multiple heads to intermediate layers of a network, achieving efficient uncertainty estimation with a single forward pass. Deep deterministic methods [35][10] quantify uncertainty through geometrical or statistical properties of hidden features, providing accurate out-of-distribution uncertainty estimation, but their bi-Lipschitz regularization can be unstable in deeper models. The evidence-based method [6] distinguishes aleatoric from epistemic uncertainty, but due to the lack of evidence ground truth, the model can only be trained with observed one-hot labels, which may make the second-order distribution overly peaked. ConfidNet designs an auxiliary network to learn a novel confidence criterion, making it applicable to any pre-trained segmentation model, albeit requiring an additional network for confidence estimation.
Despite the notable achievements of these methods, they share one common limitation: they rely on models trained on task objectives without considering the uncertainty estimation metric during the learning process. The superior performance of our method can be attributed to the following key factors. First, our method explicitly optimizes uncertainty estimation metrics via a reward function, thereby directly calibrating prediction risk and model confidence. Furthermore, our fine-grained parameter update scheme enables effective parameter exploration of the policy network. In addition, the incorporated evidential learning layer allows us to provide more informative estimates of different uncertainty types. As shown in Tables 1 and 2 in our paper, our method also has the advantage of lower inference time, a critical factor for real-time intra-operative healthcare applications. The limitations of our method include the need for an additional RL reward maximization process and the current use of two separate reward functions for the different types of uncertainty.
*Medical Imaging Comparison*: For the comparison with auxiliary network-based methods in our rebuttal, we focused on ConfidNet (TPAMI’21) because it is more recent than [18] (MICCAI’19). Given that ConfidNet and [18] share similar insights in leveraging an auxiliary network, we only added ConfidNet to our comparison due to the limited rebuttal time. However, we understand your point that [18] is a more relevant work, as it is tailored for medical image segmentation. We are implementing [18] now and will post the results as soon as possible.
Meanwhile, we would like to bring your attention to Tables 1 and 2 in our paper, where we have included the MICCAI 2022 method Layer Ensemble [22] for comparison. This is exactly a recent SOTA uncertainty estimation method designed for medical image segmentation, and our proposed method outperforms it by a large margin. We hope this helps address your concern.
*Evaluation Metrics References*: Thank you for your further comments on a direct comparison with the referenced methods. We agree that it would improve the paper, and we are actively working on it. For the comparison with [17], since their code is not released, we tried our best to re-implement their method based on the information provided in their paper. Unfortunately, it is unstable in learning the joint latent space of input images and corresponding segmentation maps. As a result, the obtained uncertainty estimation was unsatisfactory, even falling below the baseline. We acknowledge the importance of including more SOTA medical imaging methods in the comparison, so we are currently conducting experiments on [18] and [R3] to make our experimental evaluation more comprehensive. We believe our comparison with various types of general methods as well as dedicated medical imaging methods will effectively demonstrate the effectiveness of our method.
---
Reply to Comment 1.2.1:
Title: Comparison with More Medical Imaging Methods
Comment: Thank you for your patience. We have finished implementing the auxiliary feat. and auxiliary segm. networks from [18] based on their publicly released code. The tables below present the quantitative comparison of our FGRM method with auxiliary feat. [18] on the LC and ESD segmentation datasets. We report results for auxiliary feat. since it obtains slightly better performance than auxiliary segm. on the two datasets, although their performances are largely comparable. Our method consistently outperforms [18] across all evaluation metrics. This superior performance demonstrates the benefits of explicit model tuning with our RL algorithm based on uncertainty estimation metrics. We will include the comparison with [18] in the final version to provide a more comprehensive analysis against uncertainty estimation methods tailored for medical image segmentation.
**LC segmentation dataset**
| | | ID calibration | | OOD inference | |
|:---------:|:-----:|:--------------:|:------:|:-------------:|:----:|
| Method | ECE ↓ | MI ↑ | Dice ↑ | PR ↑ | BR ↑ |
| Auxiliary feat.| 16.87 | 4.35 | 71.32 | 1.33 | 0.28 |
| FGRM | 9.63 | 5.87 | 74.88 | 1.85 | 0.47 |
**ESD segmentation dataset**
| | | ID calibration | | OOD inference | |
|:---------:|:-----:|:--------------:|:------:|:-------------:|:----:|
| Method | ECE ↓ | MI ↑ | Dice ↑ | PR ↑ | BR ↑ |
| Auxiliary feat.| 15.76 | 3.41 | 84.41 | 1.14 | 0.18 |
| FGRM | 10.42 | 4.72 | 87.23 | 1.78 | 0.54 |
---
Summary: This paper proposes a novel method for uncertainty estimation. A segmentation network is first pre-trained by considering a generative model where the segmentation of an input $x$ is drawn from a Dirichlet distribution, which enables MLE.
The main contribution of the paper is then the reinforcement learning (RL) algorithm proposed whereby a novel reward function is used to maximise uncertainty estimation. Lastly, the authors posit that not all parameters in the network should be updated accordingly and use ideas from EWC in continual learning to learn parameter-specific update rules.
To learn aleatoric and epistemic uncertainty quickly, the authors assume a generative model where the segmentation is drawn from a Dirichlet distribution, thus optimising the MLE through integration of the conjugate prior and the likelihood function. The aleatoric and epistemic (calibrated) uncertainties can consequently be obtained through the learned $\alpha$ parameter.
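As an illustrative sketch (not the paper's code), one common way to read these two uncertainty terms off a learned Dirichlet concentration vector $\alpha$ is: aleatoric uncertainty as the entropy of the expected categorical distribution, and epistemic uncertainty as the "vacuity" $K/S$, which is high when the total evidence $S = \sum_k \alpha_k$ is low:

```python
import numpy as np

def dirichlet_uncertainties(alpha):
    """Split a Dirichlet concentration vector alpha (length K) into
    expected class probabilities plus aleatoric/epistemic uncertainty.
    One common convention: aleatoric = entropy of the expected categorical
    distribution; epistemic = K / S ("vacuity": high when evidence is low)."""
    alpha = np.asarray(alpha, dtype=float)
    K = alpha.shape[-1]
    S = alpha.sum(axis=-1, keepdims=True)   # total concentration (evidence + prior)
    p = alpha / S                           # expected class probabilities
    aleatoric = -(p * np.log(p + 1e-12)).sum(axis=-1)
    epistemic = K / S.squeeze(-1)
    return p, aleatoric, epistemic

# A well-evidenced pixel vs. an uninformed one (alpha = all ones = flat prior).
p_sure, al_sure, ep_sure = dirichlet_uncertainties([20.0, 1.0, 1.0])
p_flat, al_flat, ep_flat = dirichlet_uncertainties([1.0, 1.0, 1.0])
```

The flat-prior pixel yields both higher entropy and higher vacuity, matching the intuition that it carries more of both kinds of uncertainty.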
Strengths: * This is an excellent paper. It is well written, clear in its intentions. The methodology is clearly presented (Figure 1 is excellent), it is clear how the algorithm works and why each section of the method was developed. The results compared to various baselines help consolidate the strength of the paper.
* There are many novelties in this paper, which in isolation might already be interesting contributions. Taken as a whole, the authors have presented a compelling piece of work. The idea to combine i) evidential learning (parameterising the segmentation as $s \sim D(p|\alpha)$) for uncertainty, ii) using EWC to learn the parameter updates necessary in the RL algorithm for calibrating uncertainty, and iii) the RL reward function is interesting and novel. The results are subsequently impressive.
Weaknesses: * I would have liked to have seen evidence of the calibration on a more diverse set of datasets to better show the applicability and performance of the algorithm. A diverse set such as Cityscapes (real scenes), a dataset such as BraTS (tumour segmentation from MRI scans) in addition to the surgical videos used in this paper would have strengthened the work.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: * Did the authors also consider papers which propose learning heteroscedastic uncertainty with MC dropout as a baseline, such as Kendall et al. (https://arxiv.org/abs/1703.04977)? This method is used quite significantly and it would have been nice to see how it performs in comparison.
* It would be nice to see uncertainty maps of other models to compare. Is this possible?
* It is not entirely clear how the RL algorithm mitigates confidence miscalibration and OOD overconfidence. This is only really mentioned in passing in Section 3.4, in the implementation details for Equation 1.
* In the experiments, did the authors consider a model where only evidential learning was used as another baseline to compare against vanilla MLE? It would be nice to understand how strong of a baseline that is and whether this could be used in isolation.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We sincerely appreciate your insightful comments and positive feedback regarding the recognition of “many novelties” in our paper, clear presentation of our methodology and strength of experimental evaluation. We would like to provide the necessary clarifications and improvements in response to your comments as follows.
> * Evidence of the calibration on a more diverse set of datasets to better show the applicability and performance of the algorithm.
Reply: Thank you for your comment. Following your suggestion, we have conducted additional experiments on the Cityscapes dataset to evaluate the uncertainty estimation of segmentation on real urban scenes. The results are presented below, along with comparisons to various baselines. It is evident from the results that our method outperforms all the comparison methods for uncertainty estimation on the Cityscapes dataset. These results demonstrate the effectiveness of our method in model uncertainty calibration on diverse datasets, including both surgical videos in medical applications and urban scenes in natural images.
**Cityscapes dataset**
| | | ID calibration | | OOD inference | |
|-----------|-------|----------------|--------|---------------|------|
| Method | ECE ↓ | MI ↑ | Dice ↑ | PR ↑ | BR ↑ |
| Backbone | 11.94 | 3.51 | 78.15 | 0.70 | 0.06 |
| LayerEnsemble | 10.40 | 4.73 | 79.04 | 1.15 | 0.11 |
| DeepEnsemble | 10.66 | 4.91 | 79.10 | 1.37 | 0.26 |
| LDU | 9.14 | 5.75 | 78.69 | 1.55 | 0.35 |
| MC-dropout | 11.15 | 4.09 | 78.46 | 1.23 | 0.19 |
| NatPN | 9.32 | 5.30 | 78.45 | 1.62 | 0.34 |
| FGRM | 8.52 | 6.97 | 79.23 | 1.77 | 0.48 |
> * Did the authors also consider papers which propose learning heteroscedastic uncertainty with MC dropout as a baseline, such as Kendall et al. (https://arxiv.org/abs/1703.04977)? This method is used quite significantly and it would have been nice to see how it performs in comparison.
Reply: Thanks for the comment. We would like to draw your attention to Table 1 and Table 2 in the paper, where we have compared our method with MC dropout. The results show that our method outperforms MC dropout by a large margin across all the evaluation metrics.
> * It would be nice to see uncertainty maps of other models to compare. Is this possible?
Reply: Thanks for the comment. Please kindly note that we have visualized the uncertainty maps of other models in Appendix Fig. 1. For clearer illustration, we have also updated Appendix Fig. 1 by including error maps of segmentation predictions (please refer to Fig. 2 of the uploaded PDF file). From the figure, we can see that our model generates uncertainty estimation maps that present better correlation with incorrect predictions when compared to other methods.
> * It is not entirely clear how the RL algorithm mitigates confidence miscalibration and OOD over confidence. This is only really mentioned in passing in Section 3.4 in the implementation details for Equation 1
Reply: Thank you for the comment. For in-distribution (ID) calibration, our reinforcement learning (RL) algorithm maximizes a reward derived from the Expected Calibration Error (ECE) metric. As shown by the formula in Appendix A.1, ECE quantifies the average disparity between confidence and accuracy; that is, it effectively evaluates whether predictions with higher confidence (indicative of lower uncertainty) are more likely to be accurate. By maximizing this reward (i.e., minimizing ECE) with the RL algorithm, confidence miscalibration is reduced. For out-of-distribution (OOD) inference, our objective is to make the model assign higher uncertainty to OOD regions than to their ID counterparts. To achieve this, we devise a reward function defined as the ratio between the uncertainty of OOD regions and that of ID regions. By maximizing this reward with the RL algorithm, the model learns to mitigate OOD overconfidence.
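As a rough, illustrative sketch of the two reward terms described above (our own simplification, not the exact implementation), the ID reward can be taken as negative binned ECE and the OOD reward as the OOD/ID mean-uncertainty ratio:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: bin-size-weighted average |accuracy - mean confidence| gap."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

def id_reward(confidences, correct):
    # In-distribution reward: lower ECE -> higher reward.
    return -expected_calibration_error(confidences, correct)

def ood_reward(uncertainty, ood_mask):
    # OOD reward: mean uncertainty on OOD pixels over mean uncertainty on ID pixels.
    u = np.asarray(uncertainty, dtype=float)
    m = np.asarray(ood_mask, dtype=bool)
    return u[m].mean() / (u[~m].mean() + 1e-12)
```

For example, predictions at 80% confidence that are right 80% of the time get an ID reward near zero (perfect calibration), while overconfident predictions are penalized; the OOD reward grows as the model assigns relatively more uncertainty to OOD regions.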
> * In the experiments, did the authors consider a model where only evidential learning was used as another baseline to compare against vanilla MLE? It would be nice to understand how strong of a baseline that is and whether this could be used in isolation.
Reply: Thanks for the comment. Following your suggestion, we have included a model named “vanilla EDL”, in which only evidential learning is used, as a comparison with vanilla MLE. The results in the tables below show that vanilla EDL achieves superior performance compared to vanilla MLE, demonstrating the benefits of evidential learning in uncertainty estimation. Moreover, our FGRM model further outperforms vanilla EDL, owing to our RL framework with the designed fine-grained parameter update mechanism.
**LC segmentation**
| | | ID calibration | | OOD inference | |
|-----------|-------|----------------|--------|---------------|------|
| Method | ECE ↓ | MI ↑ | Dice ↑ | PR ↑ | BR ↑ |
| Vanilla MLE | 19.46 | 2.88 | 71.22 | 0.88 | 0.08 |
| Vanilla EDL | 12.98 | 4.20 | 73.66 | 1.45 | 0.21 |
| FGRM | 9.63 | 5.87 | 74.88 | 1.85 | 0.47 |
**ESD segmentation**
| | | ID calibration | | OOD inference | |
|-----------|-------|----------------|--------|---------------|------|
| Method | ECE ↓ | MI ↑ | Dice ↑ | PR ↑ | BR ↑ |
| Vanilla MLE | 17.76 | 2.08 | 84.12 | 0.82 | 0.10 |
| Vanilla EDL | 12.15 | 3.68 | 86.20 | 1.40 | 0.18 |
| FGRM | 10.42 | 4.72 | 87.23 | 1.78 | 0.54 |
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for the detailed rebuttal and the additional experiments on i) Cityscapes and ii) results for vanilla EDL.
I think this is a strong paper and the additional work provided by the authors strengthens the paper. My original score of 8 stands but I would defend publication of this manuscript.
One note, the authors state in the rebuttal `Reply: Thanks for the comment. We would like to draw your attention to Table 1 and Table 2 in the paper, where we have compared our method with MC dropout. The results show that our method outperforms MC dropout by a large margin across all the evaluation metrics.`
However, learned heteroscedastic uncertainty (https://arxiv.org/abs/1703.04977) is not the same as MC dropout. In that paper, the authors develop a branched network (akin to hard-parameter sharing in multi-task learning) where one branch predicts the mean and the other the variance on a pixel-wise basis. This is fed into the loss function. MC dropout can be used in addition to this method to estimate both epistemic and aleatoric uncertainty, whereby the total uncertainty is the sum.
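For concreteness, the regression form of that learned heteroscedastic loss (with the variance branch predicting a log-variance $s = \log \sigma^2$ for numerical stability) can be sketched as:

```python
import numpy as np

def heteroscedastic_loss(y_true, mu, log_var):
    """Per-pixel Gaussian NLL with a learned log-variance branch
    (regression form from Kendall & Gal). Predicting s = log(sigma^2)
    keeps the loss stable: exp(-s) attenuates the residual on pixels the
    network flags as noisy, while the +s term penalises claiming high
    variance everywhere."""
    y_true, mu, s = (np.asarray(a, dtype=float) for a in (y_true, mu, log_var))
    return float(np.mean(0.5 * np.exp(-s) * (y_true - mu) ** 2 + 0.5 * s))
```

Assigning high predicted variance to a genuinely noisy pixel lowers the loss, whereas claiming high variance on well-fit pixels raises it, which is what lets the variance branch learn aleatoric uncertainty from data.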
---
Reply to Comment 1.1.1:
Title: Thank You and Additional Experiments
Comment: Thank you for taking the time to read our response and for recognizing the strengths of our paper, along with the improvements from our additional work. We apologize that we initially misunderstood your suggested comparison method. Following your suggestion, we provide a comparison of our method with the learned heteroscedastic uncertainty (LHU, https://arxiv.org/abs/1703.04977) method in the tables below on the LC and ESD segmentation datasets. Our method achieves superior performance in both ID and OOD scenarios, which demonstrates the effectiveness of our RL-based uncertainty estimation paradigm.
**LC segmentation dataset**
| | | ID calibration | | OOD inference | |
|:---------:|:-----:|:--------------:|:------:|:-------------:|:----:|
| Method | ECE ↓ | MI ↑ | Dice ↑ | PR ↑ | BR ↑ |
| LHU | 17.14 | 3.41 | 72.57 | 1.16 | 0.12 |
| FGRM | 9.63 | 5.87 | 74.88 | 1.85 | 0.47 |
**ESD segmentation dataset**
| | | ID calibration | | OOD inference | |
|:---------:|:-----:|:--------------:|:------:|:-------------:|:----:|
| Method | ECE ↓ | MI ↑ | Dice ↑ | PR ↑ | BR ↑ |
| LHU | 16.62 | 3.36 | 83.31 | 1.12 | 0.14 |
| FGRM | 10.42 | 4.72 | 87.23 | 1.78 | 0.54 |
---
Summary: This paper empirically studied uncertainty estimation in safety-critical scene segmentation. The authors employed reinforcement learning (RL) methodologies, including a fine-grained reward maximization (FGRM) framework and a Fisher information matrix for parameter updates. Additionally, to calibrate prediction risk and model confidence, the authors proposed a new reward function closely tied to uncertainty estimation. From the experiments, the authors drew the following novel findings: (1) their method outperformed different types of state-of-the-art uncertainty estimation methods across all evaluation metrics; (2) the fine-grained parameter update mechanism improved the effectiveness of model tuning based on the reward function. Furthermore, the experimental results demonstrated the superiority of the proposed method on two medical datasets from safety-critical applications, specifically laparoscopic cholecystectomy scene segmentation and endoscopic submucosal dissection scene segmentation.
Strengths: The reviewer recognizes the significance and importance of the task tackled in this paper. The findings and methodologies can be highly valuable in safety-critical scenarios such as medical applications. In addition, the use of reinforcement learning (RL) for uncertainty estimation in model tuning is a novel approach, and the proposed reward function specifically designed for uncertainty estimation is innovative. Furthermore, since the proposed method consistently outperformed state-of-the-art uncertainty estimation methods on the evaluation metrics, the contribution of this paper is noteworthy. Additionally, the strengths of this paper can be summarized as follows:
1. The paper is easy to understand and easy to follow, making it accessible to a wide audience.
2. The authors clearly demonstrated the experimental results and effectively derived novel findings, supporting their claims.
3. The experimental results strongly support the effectiveness of the proposed method, highlighting its superiority over existing approaches.
Weaknesses: The reviewer agrees that the task addressed in this paper has several strengths and is significantly relevant to the community, but this paper is not technically novel. The main reasons are listed below (See Limitations).
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. The application scope is significantly limited (only applicable to endoscopy modality and only for medical imaging). It would be better if the proposed framework could be applied to a broader range of applications. It would be beneficial to have further discussions on this issue.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: 1. Limited novelty. The differences (novelty) compared to existing methods are not clearly demonstrated. In the rebuttal stage, the reviewer strongly hopes that the authors will emphasize the novelty of their proposed method and clearly highlight its differences from existing approaches, clarifying what is new and innovative.
2. Additionally, in the manuscript, the authors mention the inclusion of out-of-distribution (OOD) data alongside in-distribution data. The reviewer suggests justifying the use of OOD data in the experiments to strengthen the logical flow of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Thank you for your positive comments on the significance and importance of the task tackled in our work, the novelty of using reinforcement learning (RL) in uncertainty estimation, and our strong experimental results. Our detailed responses to your comments are as follows.
> * Applying the proposed framework to a broader range of applications.
Reply: Thank you for your suggestions. To validate that our method can be applied to a diverse set of datasets, we further applied our method to the Cityscapes dataset for urban scene segmentation, which significantly differs from the surgical video datasets used in our paper. The Cityscapes dataset represents a diverse and realistic collection of urban scene images gathered from 50 cities with the semantic segmentation task involving 30 classes. The uncertainty estimation results on the Cityscapes dataset are shown in the table below. We can see that our method consistently outperforms all the comparison methods, indicating the potential of our method on the safety-critical autonomous driving scenarios.
**Cityscapes dataset**
| | | ID calibration | | OOD inference | |
|-----------|-------|----------------|--------|---------------|------|
| Method | ECE ↓ | MI ↑ | Dice ↑ | PR ↑ | BR ↑ |
| Backbone | 11.94 | 3.51 | 78.15 | 0.70 | 0.06 |
| LayerEnsemble | 10.40 | 4.73 | 79.04 | 1.15 | 0.11 |
| DeepEnsemble | 10.66 | 4.91 | 79.10 | 1.37 | 0.26 |
| LDU | 9.14 | 5.75 | 78.69 | 1.55 | 0.35 |
| MC-dropout | 11.15 | 4.09 | 78.46 | 1.23 | 0.19 |
| NatPN | 9.32 | 5.30 | 78.45 | 1.62 | 0.34 |
| FGRM | 8.52 | 6.97 | 79.23 | 1.77 | 0.48 |
> * The differences (novelty) compared to existing methods are not clearly demonstrated.
Reply: Thank you for your comments. We would like to highlight the novelty of our proposed method based on the following three key aspects.
i. Our proposed FGRM framework is **the first work** to directly optimize an uncertainty estimation reward function in an RL paradigm for effective uncertainty estimation. Existing uncertainty estimation methods are based on standard supervised learning with task-specific objectives, such as cross-entropy loss or Dice loss, which are not directly related to the uncertainty estimation task. Instead, our method leverages the RL paradigm, which enables us to optimize a reward function closely tied to uncertainty estimation metrics for explicit model tuning of uncertainty estimation. This is not feasible in standard supervised learning, because the discrete and non-differentiable nature of these metrics prevents effective back-propagation. This framework novelty is also acknowledged by reviewer WcWT.
ii. We design a novel fine-grained parameter update mechanism based on the Fisher information matrix for reward maximization in RL. Existing works applying RL to vision tasks typically adopt the policy gradient theorem to estimate the gradient of the reward term, in which the gradient of every parameter is weighted by the reward uniformly. Instead, our mechanism imposes fine-grained reward weighting on each network parameter individually, enabling more effective model tuning based on the reward function. The novelty of this fine-grained parameter update design is also acknowledged by reviewers WcWT and iwgk.
iii. We leverage evidential deep learning for the pre-training of the segmentation backbone. Evidential learning explicitly parameterizes the conjugate prior of the categorical distribution, enabling us to quantify the aleatoric uncertainty and epistemic uncertainty separately. This is in contrast to standard segmentation models, where the probability values from the softmax layer mix both types of uncertainty together. The novelty of incorporating evidential learning is also acknowledged by reviewer WcWT and reviewer mEtW.
Based on the three points mentioned above, we believe that our work brings new insights and valuable contributions to the task of uncertainty estimation, which would be of great interest to the research community.
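To make point ii concrete, here is a hypothetical NumPy sketch of the general idea (a diagonal, EWC-style Fisher estimate used to weight each parameter's reward gradient individually); this is our illustration of the principle, not the paper's exact update rule:

```python
import numpy as np

def diagonal_fisher(per_sample_grads):
    """EWC-style diagonal Fisher estimate: mean squared per-sample gradient.
    Large entries mark parameters the current predictions depend on heavily."""
    g = np.asarray(per_sample_grads, dtype=float)   # shape (n_samples, n_params)
    return (g ** 2).mean(axis=0)

def fine_grained_update(params, reward_grad, fisher, lr=0.1, eps=1e-8):
    """Weight each parameter's reward gradient individually (here: by inverse
    Fisher, normalised to [0, 1]) instead of scaling all parameters by one
    scalar reward, so low-sensitivity parameters are freer to explore."""
    w = 1.0 / (fisher + eps)
    w = w / w.max()
    return np.asarray(params, dtype=float) + lr * w * np.asarray(reward_grad, dtype=float)

fisher = diagonal_fisher([[1.0, 0.1, 0.01], [1.0, 0.1, 0.01]])
new_params = fine_grained_update(np.zeros(3), np.ones(3), fisher)
```

In this toy example the parameter with the smallest Fisher value receives the largest update, which is the per-parameter (rather than uniform) weighting the rebuttal describes.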
> * Additionally, in the manuscript, the authors mention the inclusion of out-of-distribution (OOD) data alongside in-distribution data. The reviewer suggests justifying the use of OOD data in the experiments to strengthen the logical flow of the paper.
Reply: Thanks for your suggestion. The rationale for including OOD data in our experiments is to provide a comprehensive evaluation of our proposed uncertainty estimation method. In practical deployment scenarios, segmentation models often encounter data that deviates from their training distribution, resulting in potentially inaccurate segmentation predictions. By including OOD data, we aim to simulate these real-world challenges and validate the effectiveness of our method when dealing with OOD data, which is crucial for reliable uncertainty estimation.
---
Rebuttal 1:
Rebuttal: We appreciate the reviewers taking the time to review our work and provide constructive feedback. We are glad to see that most reviewers recognized the novelty of our method, the strength of our experimental evaluations, and the good presentation of our paper.
We have made every effort to address all the concerns raised, including conducting additional experiments on the Cityscapes dataset to validate that our method has broad applicability, adding more comparison experiments and ablation studies, and providing clarifications where needed. Our detailed responses are provided to each reviewer separately below. We have also uploaded a PDF file containing updated figures for a clearer visualization of uncertainty estimation and more ablation results.
We hope that our rebuttal can successfully address the reviewers' questions, and look forward to receiving the reviewers’ support for our work.
Pdf: /pdf/5b40dd925ead1f6bda4a5b4b4f8076bfc1afa56f.pdf
Dataset source: NeurIPS_2023_submissions_huggingface (2023)
---
Summary: This paper introduces a new uncertainty estimation method for medical imaging segmentation tasks. The method relies on a pre-trained segmentation network that uses evidential learning to produce the parameters of a Dirichlet distribution over the class probabilities, which can be translated to aleatoric and epistemic uncertainty estimates. Then, an uncertainty estimation network is trained by updating its parameters based on a custom reward function weighted by the Fisher information matrix for all of the parameters. Experiments on two medical image segmentation datasets show that the method outperforms baselines including ensemble-based methods, deterministic uncertainty estimation methods, dropout, and an evidence update method. Ablations show the value of each part of the method.
[Note: I do not work on uncertainty estimation or reinforcement learning, so other reviewers may be more qualified than me to comment here, and it is definitely possible I have misunderstood some of the paper.]
Strengths: - The paper contains an extremely thorough ablation study and baseline comparison which demonstrate the value of each component of the system and a clear benefit over baselines (to be clear, this is a very big strength).
- The idea of incorporating the outputs of evidential learning in the segmentation output into the training of an uncertainty estimation network seems interesting and is to my knowledge (from a brief literature search -- I am not familiar with this field) novel.
Weaknesses: - **I do not understand how this is a reinforcement learning strategy, as opposed to standard training of another neural network that predicts uncertainty.** In particular, the form of the “RL Reward Maximization” part of Algorithm 1 looks exactly like standard training where network parameters are being updated via a loss function, and as far as I can tell, there is no notion of an “agent” performing actions in sequential time steps. I still think there is a contribution here, because the empirical results are clearly improved over the baselines, and there may be a misunderstanding on my part because I am not a reinforcement learning researcher. However, I think the paper may need a significant rewrite to clarify it and fully meet the bar for acceptance: either the paper should be written with the contribution being essentially a novel loss function, or the exposition needs some clarification in framing this as a reinforcement learning problem, as I did spend a good amount of time trying to understand this and could not figure it out. I am happy to update my rating later based on subsequent discussion on this point.
- **Some details of the ablation study are unclear.** It appears evidential learning was added after reward maximization. How was reward maximization performed without the aleatoric and epistemic uncertainties provided from the evidential learning task (i.e. was R computed as in Appendix A.1? If so, how?)?
- **OOD tasks are not realistic.** I do not think the OOD perturbations induced by Hendrycks et al are a realistic approximation to the types of OOD examples that would be seen in a medical segmentation task, where one would be especially concerned with unusual anatomy/pathology, etc. I don’t consider this a dealbreaker for accepting this paper, however, as it certainly seems reasonable that these OOD perturbations help in formulating the reward function; I am just less able to draw conclusions from the “OOD Inference” columns in the results tables.
- **Visual results are hard to interpret.** Figure 2 does not provide the ground truth segmentations, so it is hard to tell where the model is incorrect, especially for readers unfamiliar with these types of images. Figure 1 in Appendix A.4 does show ground truth segmentations, but it would be helpful to see an error map of the segmentations to compare with the uncertainty.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Why is this considered a reinforcement learning algorithm?
- How was reward maximization performed without the aleatoric and epistemic uncertainties provided from the evidential learning task?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper has a clear “Limitations” section which describes a desire to unify the framework for dealing with in-distribution and out-of-distribution examples. I would like to see some additional discussion of the fact that the OOD evaluations are not fully representative of what might be encountered in medical image segmentation tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your positive comments on our thorough ablation study, clear benefits over baselines, and novel methodology. We would like to address each of your questions below.
> * Why is this considered a reinforcement learning algorithm?
Reply: Thank you for your comment. Our method aims to explicitly optimize uncertainty estimation metrics as a reward function to directly calibrate prediction risk and model confidence. However, achieving this in standard supervised learning is infeasible due to the inherently discrete and non-differentiable nature of these metrics, which prevents effective backpropagation. To overcome this limitation, our approach formulates the uncertainty estimation task as a reinforcement learning (RL) problem. This allows us to directly optimize uncertainty estimation metrics, like the Expected Calibration Error, that are both discrete and non-differentiable. This process involves the formulation of a Markov Decision Process (MDP), where the segmentation model takes on the role of the policy. Within this framework, segmentation predictions are actions and the input image is the state, as elaborated in a recent work on tuning computer vision models with task rewards [31]. The policy performs the action (segmentation) in a single time step, so there is no notion of sequential actions across time. The “RL Reward Maximization” part in Algorithm 1 differs from standard training in terms of the feedback signal and policy optimization. In RL reward maximization, feedback is received in the form of rewards based on the performed action, and the objective is to learn a policy that maximizes these rewards, which are usually discrete and non-differentiable. In standard training, the objective involves minimizing a loss function on labeled data, which is continuous and differentiable. Furthermore, RL reward maximization typically optimizes a policy that maps states to actions so as to maximize rewards, whereas standard training optimizes model parameters to minimize a specific loss function. We appreciate your feedback and will provide a clearer explanation of our RL paradigm in the paper.
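As an illustration of the single-time-step reward maximization described above, a score-function (REINFORCE-style) estimator can optimize a reward through which no gradient flows. This is our own toy sketch, not the paper's exact algorithm: the two-class setup, the ±1 reward, and the learning rate are all hypothetical.

```python
import math
import random

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def reward(action, label):
    # Stand-in for a discrete, non-differentiable metric (e.g. a
    # calibration-style score): no gradient flows through this function.
    return 1.0 if action == label else -1.0

def reinforce_step(logits, label, lr=0.1):
    # One "episode": the input is the state, the sampled prediction is the
    # action, and the episode ends after a single time step.
    probs = softmax(logits)
    action = random.choices(range(len(probs)), weights=probs)[0]
    r = reward(action, label)
    # Score-function gradient of the expected reward w.r.t. the logits:
    #   d log pi(a) / d logit_k = 1[k == a] - probs[k]
    grad = [(1.0 if k == action else 0.0) - probs[k]
            for k in range(len(probs))]
    # Gradient ascent on the expected reward.
    return [z + lr * r * g for z, g in zip(logits, grad)]

random.seed(0)
logits = [0.0, 0.0]  # start from a uniform two-class policy
for _ in range(500):
    logits = reinforce_step(logits, label=1)
# The policy shifts probability mass toward the rewarded class.
print(softmax(logits))
```

The reward function here is deliberately opaque to autodiff, which is exactly the situation that motivates the RL formulation over a standard differentiable loss.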
> * For the ablation study, how was reward maximization performed without the aleatoric and epistemic uncertainties provided from the evidential learning task?
Reply: For the ablation study, when evidential learning is not employed, a softmax layer is used in place of the evidence layer in the segmentation backbone. Since the softmax layer cannot separate the aleatoric and epistemic uncertainties, following the baseline setting in previous methods [10], the maximum class probability (MCP) generated by the softmax function is used as both the aleatoric and the epistemic uncertainty. This MCP-based uncertainty is subsequently used in the computation of the uncertainty-related reward function, which is maximized with our fine-grained parameter-update RL algorithm.
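A minimal sketch of the MCP-based fallback described here (our own illustration; the logits are hypothetical): the uncertainty proxy is simply one minus the maximum softmax probability.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def mcp_uncertainty(logits):
    # Without an evidence layer, the maximum class probability (MCP) of the
    # softmax output serves as a proxy for both aleatoric and epistemic
    # uncertainty: a confident (peaked) prediction yields low uncertainty.
    return 1.0 - max(softmax(logits))

confident = mcp_uncertainty([8.0, 0.0, 0.0])  # peaked prediction
ambiguous = mcp_uncertainty([1.0, 1.0, 1.0])  # uniform prediction
print(confident, ambiguous)
```

The peaked case yields near-zero uncertainty, while the uniform case yields 1 - 1/3, which is why MCP can stand in for both uncertainty types when they cannot be separated.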
> * The OOD perturbations are not a realistic approximation to the types of OOD examples that would be seen in a medical segmentation task.
Reply: Thanks for your comment. For the generation of out-of-distribution (OOD) examples, we randomly applied one out of four types of perturbations, including contrast, brightness, pixelate, and noise, to the original images. Among these, the perturbations of contrast, brightness, and pixelate are likely to be encountered in surgical video data, due to variations in the surgical environment. Some examples of generated OOD examples are presented in Fig. 3(a) of the uploaded PDF file. We can see that those perturbations mainly adjust the contrast, brightness, and clarity of surgical images, which might be encountered in practical scenarios. However, we acknowledge the reviewer’s suggestion that another important type of OOD cases for surgical video data would involve different anatomical structures or pathologies. We thus conducted additional tests on endoscopic submucosal dissection (ESD) surgical data, which involves organs distinct from those covered by our training data. Specifically, we applied the model trained on ESD surgical data of the stomach to test on ESD surgical data of the esophagus and the rectum. The uncertainty estimation results for these OOD cases are shown in Fig. 1(b) of the uploaded PDF file accompanying our global response. The figure demonstrates that our method effectively provides meaningful uncertainty estimation for OOD cases involving different organs.
> * It would be helpful to include ground truth segmentations in Figure 2 and error maps of segmentations in Figure 1 in Appendix A.4.
Reply: Thank you for your valuable suggestions. We have included the ground truth segmentations in Fig. 2 and error maps in Fig. 1 of Appendix A.4. The updated figures can be seen in Fig. 1 and Fig. 2 of the uploaded PDF file. In Fig. 1, we can observe that our estimated uncertainty maps effectively highlight regions where the segmentation predictions are unreliable. For example, the bottom-right areas in the first and second columns of Fig. 1 are incorrectly segmented, and our model assigns high uncertainty values to these regions accordingly. In Fig. 2, we can see that our model generates uncertainty estimation maps that correlate better with incorrect predictions compared to other methods.
---
Rebuttal Comment 1.1:
Title: Terminology questions remain, but I have raised my score.
Comment: Thanks to the authors for their thorough responses to my and other reviews.
Thanks to the authors for highlighting the connection to reference [31] and clarifying their view of the reinforcement learning framework. The rebuttal states: “segmentation predictions are actions and the input image is the state” — to me, this still doesn’t seem like an RL framework, because the actions do not affect the state of the system. However, it seems to me that [31] is characterized by a similar issue, and reframing the contribution may make the paper much easier to understand by future readers. I do think this is a good paper that should be accepted — at this point we are just discussing a terminology difference. I will defer to the AC and other reviewers on this point and will raise my score.
I am satisfied with the response to all other points in my initial review, and especially appreciate the additional tests on endoscopic submucosal dissection (ESD) surgical data and improved visualizations of the results.
I have also been following the discussion with reviewer jxwz about comparison to other medical imaging segmentation baselines. While the original paper already contained a thorough empirical characterization of the results, I think the latest response in the rebuttal further strengthens the paper and I am satisfied with the response on that point.
---
Reply to Comment 1.1.1:
Title: Thank You for Your Response
Comment: Thank you for supporting our work; we are glad that you are satisfied with our responses. Regarding the RL framework terminology, we intended to align with reference [31], but we understand and agree with your point that "RL" may not be the most precise term in our context. We would like to rephrase our uncertainty estimation problem as a reward optimization process solved with RL algorithms, rather than as a full RL system. This will be clarified in the final version. Thanks again for the rigorous suggestion.
Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models | Accept (poster) | Summary: This paper presents a novel method to connect vision-language instruction tuning with large language models.
To implement this, the authors introduce lightweight adapters, rather than heavy bottleneck modules, between the vision-language input tokens and the LLM.
A routing mechanism is designed to adaptively choose the right direction for different modalities of inputs.
After applying this method to the recent strong LLaMA model, the proposed method achieves efficient instruction tuning and closes the performance gap with the original large language model.
Strengths: - The paper is well-written and most parts of this paper are easy to follow.
- The proposed method achieves significant performance improvement pertaining to instruction tuning while using a very small magnitude of trainable parameters.
- The proposed method demonstrates favorable results on both single- and multi-modal instructions.
Weaknesses: - My biggest concern lies in the routing mechanism.
- Why do we need the routing between different kinds of inputs? Can we just use determined inputs to the modality-aware adapters?
- The role of ```s``` is not well explained.
- It seems the router output is simply a weighted sum of the two adapters. How should we explain this?
- It seems the authors also introduce adapters to the image encoder. This should also be explained.
- For the image input features, the authors use the ```[CLS]``` of every fourth layer of CLIP-ViT, are there any rationales behind this?
- I'm confused about Sec. 3.2. It seems that Sec. 3.2 can be cohesively organized with that of MMT. Are there more considerations about this section? If not, the writing should be better organized.
- Some typos:
- Line 156, ```m``` and ```n``` are in reverse order;
- Line 170, the dimension definition is not correct.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see the weaknesses above
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We highly appreciate your careful review of this paper. Your beneficial feedback and valuable suggestions have improved our paper a lot. Below, we respond to your key concerns point by point.
**Comment#1:**
Why do we need the routing between different kinds of inputs? Can we just use determined inputs to the modality-aware adapters?
**Response:**
Thanks for this insightful question. Compared to our MM-adapter, the shortcomings of modality-specific adapters lie in two aspects:
1. **Inference flexibility.** Modality-specific adapters require manually setting up the inference adapters according to the input modalities, while the MM-adapter can automatically adjust its inference paths, which is more convenient.
2. **Training efficiency.** Modality-specific adapters cannot benefit from mixture-of-modality training. With modality-specific adapters, each adapter is optimized separately on one kind of input, greatly reducing the training efficiency. Under the same training epochs, the performance of modality-specific adapters on ScienceQA is 83.76, clearly inferior to the 89.41 of the MM-adapter.
Overall, the proposed MM-adapter is still a better solution than modality-specific adapters in practice.
**Comment#2:**
The role of s is not well explained.
**Response:**
`s` is a scale factor used to adjust the numeric magnitude [12] for stable training, which is set to 1 in our experiments. We will add more explanation in our final version.
**Comment#3:**
It seems the router output is simply a weighted sum of the two adapters. How should we explain this?
**Response:**
Thanks for this comment. Weighted summation is a simple yet effective way to accomplish adapter routing in LLMs. As shown in our appendix, the routing weights become sharper as the network goes deeper. We have also tried two alternative schemes, namely *mean routing* and *hard routing*, both of which achieve inferior performance to weighted summation on ScienceQA.
| ScienceQA | Overall Acc |
| ------------------ | ------------ |
| Mean routing | 87.93 |
| Hard routing | 74.18 |
| Weighted Summation | 89.41 |
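The weighted-summation routing can be sketched as follows. This is our toy illustration: the 1-D "adapters" and the router logits are hypothetical, whereas the real router is a small learned layer over the input tokens.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def soft_route(x, adapters, routing_logits):
    # Weighted summation of the adapters' outputs. Hard routing would
    # instead pick argmax(routing_logits) and drop the other path entirely.
    weights = softmax(routing_logits)
    outputs = [adapter(x) for adapter in adapters]
    return [sum(w * out[i] for w, out in zip(weights, outputs))
            for i in range(len(outputs[0]))]

# Toy modality adapters operating on a 3-dim feature vector.
text_adapter = lambda x: [v + 1.0 for v in x]
image_adapter = lambda x: [v * 2.0 for v in x]

x = [1.0, 2.0, 3.0]
# Equal router logits give 0.5/0.5 weights: the output is the average
# of the two adapter outputs.
y = soft_route(x, [text_adapter, image_adapter], routing_logits=[0.0, 0.0])
print(y)  # [2.0, 3.5, 5.0]
```

Because both paths stay differentiable, the router weights can be learned end-to-end, which is what lets them sharpen layer by layer instead of being fixed in advance.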
**Comment#4:**
It seems the authors also introduce adapters to the image encoder. This should also be explained.
**Response:**
Thanks for this insightful comment. This setting is also an important finding of our paper. The adapters in the image encoder help to narrow the domain gap between the pre-trained and downstream images. For example, ScienceQA includes a large number of synthetic images, which the pre-trained image encoder, *i.e.,* CLIP-ViT, has barely seen before.
**Comment#5:**
For the image input features, the authors use the `[CLS]` of every fourth layer of CLIP-ViT, are there any rationales behind this?
**Response:**
Thanks for this careful review. Using [CLS] features at different scales provides richer semantic information for multimodal tasks. The semantics of visual features usually vary at different depths of the network; for example, shallow layers typically encode low-level semantics, such as texture and color. In this case, using visual features from different layers facilitates the learning of attribute words.
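A sketch of the multi-scale [CLS] selection (our hypothetical helper; whether counting starts from the fourth block is our assumption, though the 24-layer count matches CLIP ViT-L/14):

```python
def select_multiscale_cls(cls_per_layer, stride=4):
    # Keep the [CLS] token from every `stride`-th transformer block, so a
    # 24-layer encoder yields 24 / stride = 6 multi-scale feature vectors.
    return cls_per_layer[stride - 1::stride]

# Toy stand-ins: one "feature" per transformer block, tagged by index.
layers = [f"cls_layer_{i}" for i in range(24)]
selected = select_multiscale_cls(layers)
print(selected)
```

The selected features span shallow, middle, and deep blocks, which is what provides the mix of low- and high-level semantics described above.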
**Comment#6:**
I'm confused about Sec. 3.2. It seems that Sec. 3.2 can be cohesively organized with that of MMT. Are there more considerations about this section? If not, the writing should be better organized.
**Response:**
Thanks for your suggestion. Sec. 3.2 mainly describes the model details of LaVIN, which can be combined together with the descriptions of MMT. Following your suggestion, we will re-organize this section to improve the readability.
**Comment#7:**
- Some typos:
- Line 156, `m` and `n` are in reverse order;
- Line 170, the dimension definition is not correct.
**Response:**
Thanks for your careful review and we will revise these typos in our final version.
---
Rebuttal Comment 1.1:
Comment: I have no further concerns and would like to keep my original `weak accept` score. | Summary: This paper presents a cost-efficient method to fine-tune LLMs thus enabling their multimodal reasoning capabilities. The main technical contribution includes using Mixture-of-Modality Adapation, which adopts lightweight adapters to bridge the gap between modality gaps. In the meanwhile, MMA also allows automatic routing such that the model can process both multimodal prompts and text-only prompts. Experiments show superior results on ScienceQA and training efficiency, in terms of both training time and number of trainable parameters.
Strengths: - The proposed method may help significantly reduce the fine-tuning cost of LLMs with multimodal inputs.
- The proposed method is conceptually concise and generic. It could be potentially incorporated into different models/systems thus harvesting the development of both LLMs and vision encoders.
- Results on ScienceQA is promising, surpassing some strong competitors such as GPT-4.
Weaknesses: - Evaluation is not sufficiently convincing. Only quantitative results on ScienceQA are presented. This leads to a narrowed view of the multimodal capabilities of the fine-tuned model. For example, does the model perform equally well on image captioning datasets such as NoCaps, COCO; how does the model perform on cross-modal retrieval / other VQA benchmarks? Missing these results makes it difficult to understand whether the proposed fine-tuning paradigm is actually bridging the modality gap as good as some previous works.
- In L50 and also Figure 1(b), authors claim "this paradigm often requires to update most parameters of LLM, limiting the efficiency of VL instruction tuning", "these fine-tune schemes will inevitably undermine the NLP capabilities of LLMs due to the drastic changes in their parameter spaces" and "existing multimodal LLM do not support text-only instructions, greatly hindering their applications". I wouldn't say these are correct. For example, BLIP-2/MiniGPT4 both keep LLMs frozen without updating their parameters, so one can always prompt their LLMs with a text-only prompt without degradation in language generation quality. I am therefore not fully convinced by the motivation of having automatic routing, instead of simply keeping the LLM frozen to preserve its text generation capabilities.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: NA
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the time and effort you spent on this paper. Below, we respond to your concerns point by point.
**Comment#1:**
Evaluation is not sufficiently convincing. Results on image captioning and VQA should be reported.
**Response:**
Thanks for this constructive comment. Following your suggestion, we conduct more experiments on other VL benchmarks, including the suggested COCO Captioning and VQAv2, as well as zero-shot multimodal evaluation and zero-shot question answering. For a fair comparison, we follow the settings of previous works [B,18,37] on these benchmarks.
| Image Captioning *Karpathy test* | Pre-training Data | Updated Params | BLEU@4 | Cider |
| -------------------------------- | :---------------: | :------------: | :----: | :---: |
| ClipCap [A] | 0 | - | 33.5 | 113.1 |
| LLaMA-Adapter V2 [B] | 0 | 14M | 36.2 | 122.2 |
| LaVIN (ours) | 0 | 5.4M | 36.4 | 126.9 |
| BLIP [16] | 14M | 583M | 40.4 | 136.7 |
| BLIP-2 [15] | 129M | 188M | 43.7 | 145.3 |
| LaVIN (ours) | 0.6M | 5.4M | 37.8 | 131.7 |
| VQAv2 *val* | Overall | Number | Yes/no | Other |
| -------------------- | :-----: | :----: | :----: | :---: |
| LLaMA-adapter V2 [B] | 67.22 | 49.34 | 84.83 | 56.59 |
| LaVIN (ours) | 68.74 | 51.12 | 87.67 | 59.01 |
| Zero-shot TruthfulQA [E] | Acc |
| ------------------------- | :--: |
| LLama [37] | 38.7 |
| LLaVA [18] | 16.4 |
| LLaMA-Adapter V2 [B] | 24.4 |
| LaVIN (ours) | 47.9 |
| Zero-shot MME benchmark [F] | Cognition | Perception | Overall |
| --------------------------- | :-------: | :--------: | :-----: |
| BLIP2 [15] | 1293.84 | 290 | 1583.84 |
| LLaMA-adapter v2 [B] | 972.67 | 248.93 | 1221.6 |
| MiniGPT-4 [48] | 866.58 | 292.14 | 1158.72 |
| LLaVA [18] | 502.82 | 214.64 | 717.46 |
| LaVIN (ours) | 963.61 | 249.64 | 1213.25 |
From the above tables, we can observe some important findings.
1. As a parameter-efficient tuning method, our LaVIN is consistently better than LLaMA-Adapter V2 under both supervised and zero-shot settings. For example, LaVIN outperforms LLaMA-Adapter V2 by +1.52% on VQAv2 and +23.5% on zero-shot TruthfulQA, respectively.
2. Compared with BLIP and BLIP-2 pre-trained on large-scale VL data, our performance is still competitive, while the expenditure is much cheaper. **Notably, our tuning only takes 4 GPU hours on 8 A100s, while BLIP-2 requires more than 300 GPU hours on 16 A100s.**
3. Our NLP ability also outperforms existing methods. On TruthfulQA, we can see that the zero-shot performance of existing methods is clearly inferior to the original LLaMA. In stark contrast, LaVIN further improves the performance by +8.3% over LLaMA through its mixture-of-modality adaptation.
Overall, we believe that these results can further validate the effectiveness and generalization ability of LaVIN. Following your suggestion, we will supplement these results and discussions to our final version.
**Comment#2:**
The claiming about "the NLP capabilities declines in fully tuned multimodal LLM'' are not correct, since BLIP-2/MiniGPT4 both keep LLMs frozen without degradation of NLP ability. I am therefore not fully convinced by the motivation of automatic routing, instead of having just frozen LLMs.
**Response:**
Thanks for your careful review. We agree that MiniGPT-4 and BLIP-2 are inappropriate examples to support the argument of "full VL fine-tuning". Nevertheless, their shortcomings lie in their expensive parameter and training costs, as discussed in Lines 49-50 of our paper. For instance, BLIP-2 requires a deep neck branch to connect the vision and language models, which needs to be pre-trained on massive VL data for over 200 hours on 16 A100s. In stark contrast, our LaVIN updates only 5.4M parameters and requires only about 2 hours on 8 A100s.
Aside from the inappropriate examples, our argument about full VL tuning still stands. Full VL tuning is still a popular paradigm for adapting LLMs to VL tasks [18,G,H,I,J], and it does greatly undermine the NLP capabilities of the underlying LLMs. For instance, after being fully tuned on VL data, the performance of LLaVA greatly declines on zero-shot TruthfulQA, an NLP benchmark, especially compared with the default LLaMA.
Meanwhile, the common adapter-based schemes are also sub-optimal. When using adapters tuned on VL data to handle examples of mixed modalities, performance on NLP evaluations is still greatly affected, see the following table. Besides, the performance of existing adapter-based methods like LLaMA-Adapter significantly lags behind that of the fully tuned approach, *i.e.,* LLaVA, see our comparison on ScienceQA.
Compared with the above paradigms, our MM-Adaptation provides an elegant yet effective way to automatically handle inputs of mixed modalities, *i.e.,* dynamic routing. **Notably, on zero-shot TruthfulQA, our NLP performance is even clearly better than that of the frozen LLaMA.**
[A] ClipCap: CLIP Prefix for Image Captioning
[B] LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model
[E] TruthfulQA: Measuring How Models Mimic Human Falsehoods
[F] MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
[G] Otter: A Multi-Modal Model with In-Context Instruction Tuning
[H] Shikra: Unleashing Multimodal LLM’s Referential Dialogue Magic
[I] LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
[J] GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response and additional results.
The fact that LaVIN significantly underperforms BLIP-2 (and on some datasets even BLIP) diminishes its contributions, despite the lower computation cost and data requirements. It may also be possible that these fully fine-tuned models could quickly reach performance comparable to LaVIN with reduced data and compute.
I would appreciate parameter-efficient training methods more if they managed to at least come close to full model tuning results. Otherwise, I doubt their value in advancing the field. I have decided to maintain my original recommendation as such.
---
Reply to Comment 1.1.1:
Comment: Thanks for your reply. Actually, under comparable settings, LaVIN can already achieve comparable performance with the fully tuned model, i.e., LLaVA, on ScienceQA. However, the performance of LaVIN still lags behind BLIP-2 due to the lack of large-scale pre-training, which is not the main focus of this paper. We believe that large-scale pre-training will greatly boost the performance of LaVIN.
Besides, our contribution is actually orthogonal to BLIP-2. Our mixture-of-modality adaptation can also apply to BLIP during its fine-tuning stage. As discussed in our rebuttal, frozen LLMs are still suboptimal for multimodal adaption. With the help of LaVIN, the NLP ability of BLIP-2 can be further improved.
Based on these aspects, we believe that LaVIN can still be an efficient yet effective multi-modal adaptation strategy for LLMs, which is indeed valuable to the community. | Summary: > Update: I bumped up my rating to 6 after rebuttal
This paper proposes, LaVIN, an efficient and effective vision-language instruction tuning scheme to adapt LLMs. Specifically, the authors utilize parameter-efficient modules to adapt the LLaMA LM – they insert several adapters to the image encoder and mixture of modality adapters to the LM, the LM is expected to automatically select and route through the adapters of different modalities. LaVIN can be trained in an end-to-end fashion. Because only adapters are learned during training, LaVIN training is much more efficient than LLaVA that uses full tuning. Instruction tuning in LaVIN includes both text-image data and text-only data in a multi-task fashion (but with better separation due to adapters) to enable text-image instructions and text-only instruction at inference time. Strong results are achieved on ScienceQA.
Strengths: 1. The proposed method is simple and sound. By the use of a bunch of adapters, the model can naturally learn different-modality instructions in an end-to-end fashion.
2. LaVIN is much more efficient than LLaVA.
3. Empirical results on ScienceQA are strong.
4. The paper is well-written.
Weaknesses: 1. I think the proposed method resembles LLaMA-adapter a lot, maybe the authors should better note their difference early in the paper – currently LLaMA-adapter is not described until the experiment.
2. The experimental comparison with LLaMA-Adapter is not an apple-to-apple comparison because LLaMA-Adapter and LaVIN use different instruction tuning datasets – the performance gap may simply originate from the instruction tuning datasets. A better baseline should be using LLaMA-Adapter on the same instruction tuning datasets.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: As mentioned in the weakness section, I think LLaMA-adapter should be described early in the paper and a fair comparison with it is needed.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have discussed the limitations of the proposed method in Section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We highly appreciate your time and effort in reviewing this paper, and thank you for the positive rating and beneficial feedback. Below, we respond to your key concerns point by point.
**Comment#1:**
I think the proposed method resembles LLaMA-adapter a lot, maybe the authors should better note their difference early in the paper – currently LLaMA-adapter is not described until the experiment.
**Response:**
Thanks for your suggestion. The main differences between our work and LLaMA-adapter lies in the following aspects:
1. LLaMA-adapter is a static multi-modal approach, while our MM-Adapter can dynamically adjust its adaptation across single- and multi-modal tasks. This dynamic routing design not only helps LLMs retain their NLP ability but also facilitates cross-modal instruction tuning, see the following table.
2. In our mixture-of-modality adaptation (MMA), we reveal the importance of end-to-end optimization for multi-modal LLMs, *i.e.,* adding more adapters into the visual backbone, and also adopt mixed-modality training to improve cross-modal training. These innovative designs are also greatly different from LLaMA-adapter.
Based on these differences, we think that our contribution is orthogonal to LLaMA-adapter. Meanwhile, to our best knowledge, our MMA is currently the most efficient transfer learning scheme for LLaMA, which only takes 1.4 GPU hours to adapt to ScienceQA.
Following your suggestion, we will highlight the above differences in our new version.
| Zero-shot TruthfulQA [E] | Acc |
| ------------------------- | :--: |
| LLama [37] | 38.7 |
| LLaVA [18] | 16.4 |
| LLaMA-Adapter V2 [B] | 24.4 |
| LaVIN (ours) | 47.9 |
| Zero-shot MME benchmark [F] | Cognition | Perception | Overall |
| --------------------------- | :-------: | :--------: | :-----: |
| BLIP2 [15] | 1293.84 | 290 | 1583.84 |
| LLaMA-adapter v2 [B] | 972.67 | 248.93 | 1221.6 |
| MiniGPT-4 [48] | 866.58 | 292.14 | 1158.72 |
| LLaVA [18] | 502.82 | 214.64 | 717.46 |
| LaVIN (ours) | 963.61 | 249.64 | 1213.25 |
**Comment#2:**
The experimental comparison with LLaMA-Adapter is not an apple-to-apple comparison because LLaMA-Adapter and LaVIN use different instruction tuning datasets – the performance gap may simply originate from the instruction tuning datasets. A better baseline should be using LLaMA-Adapter on the same instruction tuning datasets.
**Response:**
Thanks for this suggestion. In fact, the experimental comparison on ScienceQA and the Multi-modal Chatbot is relatively fair for LLaMA-Adapter and our LaVIN. In particular, both models are trained from scratch on ScienceQA and use the same text instruction data for the Multi-modal Chatbot.
To further address your concerns, we also supplement the comparisons on COCO Captioning and VQAv2, for which the experimental settings are identical for both methods. It can be seen that LaVIN is consistently better than LLaMA-Adapter in both performance and efficiency.
| COCO Captioning | Updated Params | BLEU@4 | Cider |
| -------------------- | -------------- | ------ | ----- |
| ClipCap [A] | - | 33.5 | 113.1 |
| LLaMA-adapter V2 [B] | 14M | 36.2 | 122.2 |
| LaVIN (ours) | 5.4M | 36.4 | 126.9 |
| VQAv2 | Overall | Number | Yes/no | Other |
| -------------------- | ------- | ------ | ------ | ----- |
| LLaMA-adapter V2 [B] | 67.22 | 49.34 | 84.83 | 56.59 |
| LaVIN (ours) | 68.74 | 51.12 | 87.67 | 59.01 |
[A] ClipCap: CLIP Prefix for Image Captioning
[B] LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed response! The added results on additional datasets are very helpful. Can you clarify one further question:
Regarding the backbone models of LLaMA-adapter and LaVIN, can you confirm that the image encoder and language decoder in LLaMA-adapter and LaVIN are exactly the same?
---
Reply to Comment 1.1.1:
Comment: Thanks a lot for your reply. Yes, LaVIN and LLaMA-Adapter V2 use the same image encoder, i.e., ViT-L/14 from CLIP. The text decoder is also kept the same in most experiments, i.e., LLaMA-7B, except for zero-shot MME, where LLaMA-13B is used in LaVIN. | Summary: This paper proposes Mixture-of-Modality Adaptation (MMA), which adopts lightweight adapters to bridge the gap between LLMs and VL tasks. The adapter utilizes a router to automatically switch between single-modal and multi-modal instructions. When applying MMA to LLaMA and training on both single-modal and multi-modal data, the proposed approach achieves competitive performance with supervised methods on the ScienceQA dataset. Besides, the training of MMA is efficient and cheap.
Strengths: 1. The mixture-of-modality adapter could dynamically adjust the adaptations for single-modal and multi-modal inputs, which helps preserve the NLP capability of LLMs.
2. Achieve competitive performance given small number of training parameters.
3. Ablation studies support the effectiveness of mixture-of-modality training and mixture-of-modality adaptation.
Weaknesses: 1. The adapter idea has been extensively explored in previous efficient VL training, and using adapters to efficiently bridge vision and LLMs has been explored in LLaMA-Adapter.
2. Missing discussion/ablation of how the router in the MMA helps the LLM learn visual information while preserving NLP capability.
3. This paper only evaluates on ScienceQA. Evaluation on other benchmarks (like COCO Caption) is needed for better comparison with LLaMA-Adapter.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: NA
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your careful review and constructive suggestions for this paper. Below, we respond to your key concerns point by point.
**Comment#1:**
The adapter idea has been extensively explored in previous efficient VL training, and using adapter to efficiently bridge vision and LLM has been explored in LLaMA-Adapter.
**Response:**
Thanks for this comment. Compared with existing adapters, the main difference of our mixture-of-modality adapter (MM-Adapter) lies in its dynamic routing property. Unlike previous works, MM-Adapter can automatically switch between single- and multi-modal tasks, which not only retains the NLP ability of LLaMA but also facilitates cross-modal instruction tuning; see Table 2.
In this paper, we also explore mixture-of-modality training and reveal the importance of end-to-end optimization, *e.g.,* adding adapters to the visual backbone, which are also key differences from LLaMA-Adapter.
With these innovative designs, our LaVIN is clearly better than LLaMA-Adapter on various single- and multi-modal benchmarks, the results of which are given below. Notably, to the best of our knowledge, our method is also the most efficient multimodal adaptation scheme for LLaMA, taking only 1.4 hours to adapt to ScienceQA.
| COCO Captioning | Updated Params | BLEU@4 | Cider |
| -------------------- | -------------- | ------ | ----- |
| ClipCap [A] | - | 33.5 | 113.1 |
| LLaMA-Adapter V2 [B] | 14M | 36.2 | 122.2 |
| LaVIN (ours) | 5.4M | 36.4 | 126.9 |
| VQAv2 *val* | Overall | Number | Yes/no | Other |
| -------------------- | ------- | ------ | ------ | ----- |
| LLaMA-adapter V2 [B] | 67.22 | 49.34 | 84.83 | 56.59 |
| LaVIN (ours) | 68.74 | 51.12 | 87.67 | 59.01 |
| Zero-shot TruthfulQA [E] | Acc |
| ------------------------- | ---- |
| LLaMA [37] | 38.7 |
| LLaVA [18] | 16.4 |
| LLaMA-Adapter V2 [B] | 24.4 |
| LaVIN (ours) | 47.9 |
**Comment#2:**
Missing discussion/ablation of how the router in the MMA helps the LLM learn visual information while preserving NLP capability.
**Response:**
Thanks for this constructive comment. As discussed above, the router in MMA can automatically choose the specific adapters for single- or multi-modal inputs, so our MM-Adapters can learn visual information without compromising the default NLP capability of LLaMA.
In MMA, we also adopt a mixture-of-modality training (MMT) regime to jointly train MM-Adapters on data of different modalities, which further improves the NLP and VL capabilities of our LaVIN. For instance, compared with LLaMA-Adapter and LLaMA-Adapter v2, our LaVIN demonstrates consistently better performance on four benchmark datasets, including COCO Captioning, VQAv2, ScienceQA and TruthfulQA.
Following your suggestion, we will add the above discussions to our final submission.
**Comment#3:**
This paper only evaluates on ScienceQA. Evaluation on other benchmarks like (COCO Caption) is needed for better comparison with LLaMA-Adapter.
**Response:**
Thanks for this suggestion. The comparisons with LLaMA-adapter v2 on COCO captioning and VQA2.0 are given in the following tables.
From these results, we can see that our LaVIN is consistently better than LLaMA-Adapter v2 in both performance and parameter efficiency.
| COCO Captioning | Updated Params | BLEU@4 | Cider |
| -------------------- | :------------: | :----: | :---: |
| ClipCap [A] | - | 33.5 | 113.1 |
| LLaMA-adapter V2 [B] | 14M | 36.2 | 122.2 |
| LaVIN (ours) | 5.4M | 36.4 | 126.9 |
| VQAv2 *val* | Overall | Number | Yes/no | Other |
| -------------------- | ------- | ------ | ------ | ----- |
| LLaMA-adapter V2 [B] | 67.22 | 49.34 | 84.83 | 56.59 |
| LaVIN (ours) | 68.74 | 51.12 | 87.67 | 59.01 |
[A] ClipCap: CLIP Prefix for Image Captioning
[B] LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model
[E] TruthfulQA: Measuring How Models Mimic Human Falsehoods | Rebuttal 1:
Rebuttal: Dear Reviewers:
We thank all reviewers for their valuable and encouraging comments on the novelty and technical contributions of our paper, such as *"it surpasses some existing models that have larger size"*, *"achieve competitive performance given small number of training parameters "*, *"LaVIN is much more efficient than LLaVA"* and *" favorable results "*. During the rebuttal phrase, our main responses include:
1. Additional evaluations on new VL benchmarks, including COCO captioning, VQAv2, TruthfulQA and MME Benchmark, which further validate the effectiveness and generalization of our method.
2. The extensive comparisons with LLaMA-Adapter on four additional benchmarks, which confirm our superiority over this similar PETL method.
3. The active discussion about the role of our cross-modality routing design, which is more convenient and effective than existing solutions for handling inputs of mixed modalities.
In addition, the key concerns of all reviewers are point-by-point addressed in each rebuttal.
Lastly, the code for the new experiments in our response is anonymously released at: https://anonymous.4open.science/r/LaVIN--1067, and the technical details can be found in the attachment.
Best,
The Authors
Pdf: /pdf/fc9b051ab7d2a03d903ed94468d1f313bc253f02.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper presents a novel method to do efficient vision language fine-tuning. Through a mixture of modality adaptation mechanism, the model can close the gap between different modalities. Additionally, the paper proposes a routing algorithm to switch between multiple
tasks. The training cost of the proposed system is low as the number of total trainable parameters is less than 4M. The model has been evaluated on public benchmark of ScienceQA.
Strengths: - The proposed method is efficient. The total trainable parameters are less than 4M.
- The mixture-of-modality adaptation mechanism provides a way to adapt the LLM to vision modalities without expensive VL pretraining.
- The proposed method has been evaluated on a publicly available benchmark, achieves comparable results, and on some evaluation metrics surpasses existing models of larger size.
Weaknesses: For weaknesses, please see my questions in the section below.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - The model is only tested on one dataset, which might not be convincing enough regarding the effectiveness of the model. Can the authors evaluate the model using more public datasets?
- The proposed model utilizes two adapters and one routing function to decide which adapter to use. The features generated after the routing function are affected by a scale factor; will the performance be sensitive to the scale factor? Additionally, the routing function is defined as a weighted summation; is there any other option that performs better?
- How to make it possible to extend the model to more modalities, e.g. video, audio signal.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We highly appreciate your time and effort in reviewing this paper, and thank you for your constructive comments on our work. Below, we respond to your concerns point by point.
**Comment#1:**
The model is only tested on one dataset, which might not be convincing enough regarding the effectiveness of the model.
**Response:**
Thanks for this comment. Following your suggestion, we supplement a more comprehensive comparison of LaVIN on COCO Captioning [K], VQA2.0 [L], TruthfulQA [E] and MME Benchmark [F], and compare LaVIN with three representative approaches, namely LLaMA-Adapter V2 [B], BLIP [16] and BLIP-2 [15]. The results are reported in the following tables.
| Image Captioning *Karpathy test* | Pre-training Data | Updated Params | BLEU@4 | Cider |
| -------------------------------- | :---------------: | :------------: | :----: | :---: |
| ClipCap [A] | 0 | - | 33.5 | 113.1 |
| LLaMA-Adapter V2 [B] | 0 | 14M | 36.2 | 122.2 |
| BLIP [16] | 14M | 583M | 40.4 | 136.7 |
| BLIP-2 [15] | 129M | 188M | 43.7 | 145.3 |
| LaVIN (ours) | 0 | 5.4M | 36.4 | 126.9 |
| LaVIN (ours) | 0.6M | 5.4M | 37.8 | 131.7 |
| VQAv2 *val* | Overall | Number | Yes/no | Other |
| -------------------- | :-----: | :----: | :----: | :---: |
| LLaMA-adapter V2 [B] | 67.22 | 49.34 | 84.83 | 56.59 |
| LaVIN (ours) | 68.74 | 51.12 | 87.67 | 59.01 |
| Zero-shot TruthfulQA [E] | Acc |
| ------------------------- | :--: |
| LLaMA [37] | 38.7 |
| LLaVA [18] | 16.4 |
| LLaMA-Adapter V2 [B] | 24.4 |
| LaVIN (ours) | 47.9 |
| Zero-shot MME benchmark [F] | Cognition | Perception | Overall |
| --------------------------- | :-------: | :--------: | :-----: |
| BLIP2 [15] | 1293.84 | 290 | 1583.84 |
| LLaMA-adapter v2 [B] | 972.67 | 248.93 | 1221.6 |
| MiniGPT-4 [48] | 866.58 | 292.14 | 1158.72 |
| LLaVA [18] | 502.82 | 214.64 | 717.46 |
| LaVIN (ours) | 963.61 | 249.64 | 1213.25 |
From the above tables, we can observe some important findings.
1. As a parameter-efficient tuning method, our LaVIN is consistently better than LLaMA-Adapter v2 under both supervised and zero-shot settings. For example, LaVIN outperforms LLaMA-Adapter V2 by +1.52% on VQAv2 and +23.5% on zero-shot TruthfulQA, respectively.
2. Compared with BLIP and BLIP-2 pre-trained on large-scale VL data, our performance is still competitive while the cost is much lower. For instance, with only 0.6M pre-training data and 5.4M updated parameters, LaVIN can achieve 131.7 CIDEr on COCO Captioning. **Notably, our tuning only takes 4 GPU hours on 8 A100s, while BLIP-2 requires more than 300 GPU hours on 16 A100s.**
3. Our NLP ability also outperforms existing methods. As discussed in the main paper, the NLP capabilities of most existing multimodal LLMs are often undermined during VL instruction tuning. On TruthfulQA, we can see that their zero-shot performance is obviously inferior to the original LLaMA. In stark contrast, LaVIN further improves the performance by +8.3% over LLaMA through its mixture-of-modality adaptation.
Overall, we believe that these results can further validate the effectiveness and generalization ability of LaVIN. Following your suggestion, we will supplement these results and discussions to our final version.
**Comment#2:**
Will the performance be sensitive to the scale factor?
**Response:**
Yes, the selection of the scale factor will somewhat affect the performance. According to LoRA [12], the value of the scale factor is positively related to the learning rate, but it does not need to be specifically tuned.
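For reference, the role of such a scale factor can be illustrated with the LoRA formulation that the response cites, where the low-rank update is scaled by $\alpha/r$. The sketch below is illustrative only (hypothetical names and shapes, not LaVIN's actual code):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha, r):
    """Frozen weight W plus a low-rank update B @ A (rank r), scaled by
    alpha / r as in LoRA. A larger alpha enlarges the adapted path's
    contribution, which is why it interacts with the learning rate."""
    return W @ x + (alpha / r) * (B @ (A @ x))
```

Doubling `alpha` doubles the contribution of the low-rank branch, which behaves like rescaling its effective learning rate rather than changing the model's expressiveness.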
**Comment#3:** Is there any other option for routing functions that performs better?
**Response:**
Thanks for this insightful question. In fact, we previously tried two alternative solutions for LaVIN, namely *mean-routing* and *hard-routing*. In particular, mean-routing refers to averaging the outputs of the two adapters and is static for all examples. Like our MMA, hard-routing is also dynamic, but it only selects one adapter at each inference step. This binary routing scheme is often more difficult to optimize [M].
As shown in the following table, these alternatives are clearly inferior to our MMA, *i.e.,* the weighted summation.
| ScienceQA | Overall Acc |
| ------------------------- | :----------: |
| Mean routing | 87.93 |
| Hard routing | 74.18 |
| Weighted summation (ours) | 89.41 |
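The three routing schemes compared above can be sketched as follows (a minimal NumPy sketch with hypothetical adapter and function names, not the paper's implementation):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def soft_route(x, adapters, router_logits):
    """Weighted summation: blend all adapter outputs with learned routing weights."""
    w = softmax(router_logits)
    return sum(wi * a(x) for wi, a in zip(w, adapters))

def mean_route(x, adapters, router_logits):
    """Mean routing: a static, input-independent average of the adapters."""
    return sum(a(x) for a in adapters) / len(adapters)

def hard_route(x, adapters, router_logits):
    """Hard routing: dynamically pick a single adapter (a discrete choice)."""
    return adapters[int(np.argmax(router_logits))](x)
```

The weighted summation stays differentiable with respect to the router logits, which is one intuition for why the discrete hard-routing variant is harder to optimize.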
**Comment#4:**
How to extend the model to more modalities, e.g., video or audio signals?
**Response:**
The extension of LaVIN to more modalities can be summarized in the following steps:
1. Add the backbones of the new modalities for feature embedding.
2. Mix the training data of different modalities.
3. Add more routing paths in LLM to adapt different modalities.
4. Conduct multimodal training via our mixture-of-modality training scheme.
Based on these steps, we have successfully extended LaVIN to the video data, which will be discussed in our next work.
[A] ClipCap: CLIP Prefix for Image Captioning
[B] LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model
[E] TruthfulQA: Measuring How Models Mimic Human Falsehoods
[F] MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
[K] Microsoft COCO: Common Objects in Context
[L] Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering
[M] ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware | null | null | null | null | null | null |
Rewiring Neurons in Non-Stationary Environments | Accept (spotlight) | Summary: The paper presents a bio-inspired rewiring technique to improve deep reinforcement learning (DRL), especially continual learning in non-stationary environments. The rewiring is implemented using a permutation matrix P for all the hidden layers of an MLP. There are several benefits. First, by using a set of different wirings, the agent executes more diverse policies, which is claimed to improve exploration. Second, rewiring with a novel regularization trick can alleviate the stability-plasticity dilemma in continual learning. The effectiveness of the second claim is supported by robotic control experiments. The proposed rewiring method achieves SOTA with fewer network parameters. Ablation studies are also presented for better understanding of the technique.
---- post rebuttal -----
I appreciate the authors' reply. I lean toward acceptance.
Strengths: - The writing is clear and easy to follow
- The idea is interesting and well motivated
- The proposed method is well evaluated with diverse tasks and ablation studies
- Connections with neuroscience is well discussed
Weaknesses: - The claim in 3.3 lacks supporting evidence. Although ablation studies show a performance degradation without multi-mode, it is unclear how it relates to "exploration". There are various ways of improving exploration, e.g., simply using pink noise for actions (https://openreview.net/forum?id=hQ9V5QN27eS). If the authors would like to claim their method's advantage in exploration, comparisons with other methods should be performed. And there should be empirical or theoretical evidence to show such an advantage in terms of "enable the agents to explore unseen environment (line 168)" rather than Fig. 3 or overall performance.
- The proposed method achieves similar performance (I would say no statistically significant difference from CSP) while using fewer parameters, which is a good thing. However, since the network is small (a few FC layers of width 256), such an advantage is not obviously important. I believe it would be much more exciting if the authors could show that (1) rewiring largely outperforms CSP at a similar model size (which could be a very small network that can run on a single-chip microcomputer) or (2) it performs similarly to or better than CSP with less energy consumption on tasks that require heavy computational resources.
- Although the MLP is a fundamental neural network structure, it is unclear whether the rewiring technique works well for Transformers, CNNs, RNNs, etc.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - What are the computational cost and time consumption of your method compared to other methods, despite it having fewer model parameters?
- Is your method compatible with evolutionary algorithms if using non-differentiable sorting for P?
- What's the scalability of the rewiring technique? More specifically, will the performance improve if you make your model larger (comparable with CSP in Table 1)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have discussed limitation reasonably in the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the very constructive comments. Our responses are provided below.
[W1] Justification for multi-mode strategy
* In Figure 2a of the global response, we compare the exploration efficacy of our multi-mode strategy against pink noise [1]. While the single-mode baseline with pink noise exhibits rapid initial learning, its performance plateaus over time. In contrast, our full method with multi-mode strategy effectively avoids this suboptimal situation and achieves the highest final performance.
* In addition, we recommend that the reviewer refer to Figure 5 and line 299 in the main paper, where a similar empirical justification for multi-mode rewiring is provided.
[W2] Additional comparison with CSP
* To compare at "similar model sizes", we experiment with CSP using a reduced network width of 175 (denoted as CSP-S). This aligns the model size of the two methods on the HalfCheetah/forgetting scenario. As shown in the results below, our method achieves slightly higher mean performance than CSP-S at similar small sizes.
| | H/F performance | 95% confidence interval | Model size |
| ----- | --------------- | ------------ | ---------- |
| CSP-S | 1.27 $\pm$ 0.15 | [1.14, 1.32] | 2.3 |
| Ours | 1.31 $\pm$ 0.21 | [1.11, 1.40] | 2.1 |
[W3] Exploration of network architectures
* Currently, our work is focused on MLP due to its standard use in the continual RL benchmarks, and the observations we've made already provide important insights into the field. Meanwhile, we recognize the value of exploring rewiring techniques in other architectures such as Transformers and CNNs, possibly moving on to datasets like Split ImageNet. We will continue to investigate bio-inspired approaches like rewiring with more sophisticated architectures and tasks in the future.
[Q1] Computational efficiency
* In Appendix A.4, we have shown that our method outperforms CSP in terms of time efficiency. Here is another comparison using multiply-add operations (MACs) on the HalfCheetah/forgetting scenario:
| | MACs (M) | Model size |
| ---- | -------- | ---------- |
| FT-1 | 0.14 | 1.0 |
| PNN | 1.08 | 8.0 |
| CSP | 0.63 | 4.5 |
| Ours | 0.48 | 2.1 |
As can be seen, our method has a lower computational cost than both PNN and CSP, despite the need for two forward passes (to compute the distillation loss $L_{\textrm{KL}}$ between wirings).
[Q2] Compatibility with evolutionary algorithms
* Thank you for bringing attention to this line of work [2,3]. Integrating evolutionary algorithms into our model appears to be difficult, due to the considerable engineering effort required for their fitness functions. And the efficiency may not be comparable to differentiable sorting. Nevertheless, evolutionary algorithms represent a more biologically plausible solution, since there is no clear evidence that the brain implements differentiable algorithms, so they are still worth exploring.
[Q3] Scalability
* We validate the scalability of our method by expanding the network width to 384 (denoted as Ours-L), which allows a direct comparison with the original CSP results as follows:
| | H/F performance | 95% confidence interval | Model size |
| ------ | --------------- | ------------ | ---------- |
| Ours | 1.31 $\pm$ 0.21 | [1.11, 1.40] | 2.1 |
| Ours-L | 1.38 $\pm$ 0.10 | [1.31, 1.42] | 4.6 |
| CSP | 1.41 $\pm$ 0.07 | - | 4.5 |
The comparison shows that Ours-L has a noticeable performance improvement, closing the gap with CSP. For a more intuitive illustration, please see Figure 1b in the global response.
References
[1] Eberhard, O., Hollenstein, J., Pinneri, C., & Martius, G. (2023). Pink noise is all you need: Colored noise exploration in deep reinforcement learning. In *International Conference on Learning Representations*.
[2] Scharnow, J., Tinnefeld, K., & Wegener, I. (2005). The analysis of evolutionary algorithms on sorting and shortest paths problems. *Journal of Mathematical Modelling and Algorithms*, *3*, 349-366.
[3] Bassin, A., & Buzdalov, M. (2020). The (1+($\lambda$, $\lambda$)) genetic algorithm for permutations. In *Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion* (pp. 1669-1677).
---
Rebuttal Comment 1.1:
Comment: Thanks for replying and conducting additional experiments. However, there is still a lack of direct evidence that "multi-mode enables the agents to explore unseen environments". Figure 2a of the global response shows that the final performance is increased relative to using pink noise. According to my experience in deep RL, better exploration means faster learning at the beginning, since more diverse experience can be collected more quickly. But multi-mode shows slower learning than pink noise at the beginning, which makes me concerned that the performance gain is not due to better exploration but to other reasons.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for your response, we would like to address your concerns as follows.
* We assume that "direct evidence" of exploration refers to visualization of state space coverage, like the 2D trajectories in [1, 2]. However, this is not feasible given the high-dimensional spaces of continual RL tasks. As an alternative, we use policy diversity (Figure 3) and performance evolution (Figure 5) to demonstrate exploration. For a more rigorous description, line 168 will be revised to "multi-mode rewiring enables the agent to explore various policies".
* Our results resonate with the well-known exploration-exploitation tradeoff: exploration may yield lower short-term rewards, but higher long-term rewards [3]. The reviewer commented that "better exploration means faster learning at the beginning since it can collect more diverse experience quicker". However, several deep RL results, such as Figure 7 (left) in [2], Figure 5 (right) in [4], and Figures 4 (left) & 5 (left) in [5], also support that exploration can exhibit slower learning at the beginning.
* Lastly, it is worth noting that our rewiring method is beneficial even when faster learning is prioritized, as the rewiring-based methods (Rewire and Ours) consistently outpace the non-rewiring baseline (FT) in Figure 5.
References:
[1] Houthooft, R., Chen, X., Duan, Y., Schulman, J., De Turck, F., & Abbeel, P. (2016). VIME: Variational information maximizing exploration. *Advances in Neural Information Processing Systems*, *29*, 1117-1125.
[2] Eysenbach, B., Gupta, A., Ibarz, J., & Levine, S. (2019). Diversity is all you need: Learning skills without a reward function. In *International Conference on Learning Representations*.
[3] Sutton, R. S., & Barto, A. G. (2018). *Reinforcement Learning: An Introduction*. MIT press.
[4] Pathak, D., Gandhi, D., & Gupta, A. (2019). Self-supervised exploration via disagreement. In *International Conference on Machine Learning* (pp. 5062-5071).
[5] Badia, A. P., Sprechmann, P., Vitvitskyi, A., Guo, D., Piot, B., Kapturowski, S., ... & Blundell, C. (2020). Never give up: Learning directed exploration strategies. In *International Conference on Learning Representations*. | Summary: The authors propose a means to efficiently expand the capacity of a neural network, namely connection permutations. The approach interleaves permutations matrices between layers of a neural network, such that input-output relationships can be adapted during learning in addition to learning the weight matrices. The authors focus in particular on RL in non-stationary environments, where adapting to new tasks can result in catastrophic forgetting of older tasks.
The permutations are learned with an existing differentiable approximation to the argsort operator. In addition, the authors propose to (a) allow the agent to cache permutations from previous tasks; (b) have the agent sample from multiple permutations to encourage exploration; and (c) align cached permutations with the latest weight matrices to ensure that previously learned policies are still consistent with the latest network, preventing catastrophic forgetting. Evaluating on a collection of continual RL tasks, the authors show SOTA or near-SOTA performance while using significantly fewer parameters than other techniques.
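The core operation described here — a permutation learned through a differentiable relaxation of the argsort operator — can be sketched roughly as follows (a SoftSort-style relaxation in NumPy; the names and details are illustrative, not the paper's code):

```python
import numpy as np

def soft_permutation(scores, tau=0.1):
    """Relaxed permutation matrix built from a learnable score vector.
    Row i is a softmax peaked at the index of the i-th largest score,
    so as tau -> 0 the matrix approaches a hard permutation while
    remaining differentiable with respect to the scores."""
    s_sorted = np.sort(scores)[::-1].reshape(-1, 1)          # scores in descending order
    logits = -np.abs(s_sorted - scores.reshape(1, -1)) / tau
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)                  # rows sum to 1
```

Interleaving such a matrix between two layers, e.g. `W2 @ (P @ h)`, reorders the hidden units' input-output wiring while the surrounding weight matrices are learned as usual.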
Strengths: Originality:
The argsort relaxation is an existing method, and differentiable permutations have been used for sorting inputs and in the attention maps of transformers. The authors also point out that neuron permutations have previously been used to explore network alignment. However, a permutation-based approach to network topology does not appear to have been exploited to manage the stability-plasticity tradeoff. This is a novel insight and a meaningful step beyond the existing literature, to the best of my knowledge.
Clarity:
Overall I found the presentation of the main idea to be clear, with helpful diagrams, although there were some issues on specific points (see below).
Significance:
The application of rewiring alongside weight learning has potential impact beyond non-stationary RL, including in other continual learning tasks, RL with exploration and either RL or non-RL tasks comprised of modular sub-tasks. For this reason, the proposed method is a valuable contribution to the literature.
Weaknesses: Eq. 11 appears to require caching the previous state of the weights as well as the permutations, which could be a significant memory overhead for large networks.
It's not clear to me, but it appears that the weights are cached on presenting a new task in a rule-based way; i.e. an external signal to the network indicating that the agent has started a new task. Does this mean that the model technically receives more information than the baselines, in the form of a signal to cache?
One clarity issue was in the computational neuroscience vs. ML direction of the paper. Based on the title in particular, I expected the focus to be on biologically plausible non-stationary RL / navigation, and it took me well into the Related Work to realize that it was primarily an ML paper. I would probably include a reference to permutation in the the title if you want it to be picked up by the ML and RL communities (e.g. "Permuting Neurons in Non-Stationary Reinforcement Learning"). On the other hand, I think the title is fine if the authors primarily expect to go down the comp neuro / bio plausible route with this work as suggested in the Conclusions.
I found the definitions of the ablations to be somewhat confusing, in part because they were presented in a different order in the main text than in the ablation table. The paper starts by describing rewiring and multi-modality together in 3.3, then caching and alignment together in 3.4, while the ablation table has rewiring, then caching, then multi-modality, then alignment. Can you include paragraph sub-headers for each of these interventions in 3.3 and 3.4 to make it easier for the reader to turn back and match up with the ablation table?
Can you re-state what K and (especially) alpha and beta correspond to in the subtitle of Figure 6?
Overall I think the paper is strong. A higher rating (9/10) would likely require a more theoretically rigorous analysis of the impact of permutations on network capacity, a demonstration of lift on tasks outside of RL / non-stationary RL, or both.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - See the point above about whether the model indirectly receives information about when to cache the weights. If so, are there means (e.g. information-theoretic surprise) by which the model could detect a task change and initiate weight caching itself?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors include a short assessment of Limitations in the Conclusions. I don't foresee any significant negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the very constructive comments. We would clarify as follows.
[W1] Memory overhead
* Our method requires caching one previous weight $\boldsymbol{W}^{t-1}$ and all previous permutations. Among them, the weight accounts for most of the overhead, while the permutations prove to be highly parameter-efficient (see lines 142 and 186). Overall, the memory cost is only slightly more than twice the base size, remaining competitive with other existing methods.
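As a back-of-the-envelope check of this parameter-efficiency claim (illustrative sizes only, using the width-256 layers mentioned in the reviews):

```python
import numpy as np

n = 256                                                  # hidden layer width
cached_wiring = np.random.default_rng(0).permutation(n)  # one permutation: n indices
cached_weight = np.zeros((n, n), dtype=np.float32)       # one weight matrix: n*n floats

# A cached wiring stores O(n) integers, versus O(n^2) floats for a cached
# weight matrix, so the permutations dominate neither memory nor parameters.
ratio = cached_weight.size / cached_wiring.size
```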
[W2] Issue with task identifiers
* The model indeed receives an "external signal" indicating the start of a new task, known as the task identifier. However, it is commonly used by the compared methods. For instance, PNN and CSP expand the model upon each new task, while EWC computes the Fisher information matrix at the end of each task. Therefore, our method does not acquire more information than these baselines.
[W3] Direction in computational neuroscience vs. ML
* This paper aims to approach an ML problem from a bio-inspired route, with "Rewiring Neurons" in the title underscoring our neuroscience motivation. We've touched on some related discussion in Section 3.5 and the Conclusion, and will continue to revise the paper to make our intent clearer.
[W4] Presentation of the ablation table.
* We apologize that the ablation table follows a different order from the main text: it goes from rewiring (Section 3.2), to caching (Section 3.4), to multi-mode (Section 3.3), to alignment (Section 3.4). To help readers locate each design in the main text, we will add two paragraph sub-headers "Caching each wiring" and "Aligning wirings with weights" in Section 3.4.
[W5] Hyperparameters in Figure 6
* $K$ is the number of modes in multi-mode rewiring (Eq. 5), $\alpha$ is the coefficient for the distillation loss $L_{\textrm{KL}}$ (Eq. 6), and $\beta$ is the coefficient for the regularization loss $L_{\textrm{SP}}$ (Eq. 11). For clarification, we will re-state the definition of $\alpha$ and $\beta$ (originally in line 218) in Figure 6.
[W6-1] Impact of permutation on network capacity
* The permutation layers do not affect the network capacity. Instead, they facilitate exploration of the existing weight space (by enabling traversal from one weight point to another via permutation transforms). This largely enhances the network's adaptivity, as shown in the continual RL experiments.
[W6-2] Demonstration on non-RL tasks
* We have not yet conducted such experiments, but looking beyond RL tasks presents intriguing challenges. Take visual tasks as an example: neuroscience research [1] indicates that synapses in the visual cortex exhibit much higher stability. This could imply a need for new rewiring approaches with more moderate rewiring intensity.
[Q1] Detecting task changes
* This has been investigated in the field of task-free continual learning [2] where task identifiers are absent. For example, Rao et al. [3] suggested maintaining a buffer for poorly modeled samples and then expanding the model when the buffer reaches a critical size. Ardywibowo et al. [4] proposed to detect task changes using an energy-based novelty score.
References
[1] Grutzendler, J., Kasthuri, N., & Gan, W. B. (2002). Long-term dendritic spine stability in the adult cortex. *Nature*, *420*(6917), 812-816.
[2] Aljundi, R., Kelchtermans, K., & Tuytelaars, T. (2019). Task-free continual learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition* (pp. 11254-11263).
[3] Rao, D., Visin, F., Rusu, A., Pascanu, R., Teh, Y. W., & Hadsell, R. (2019). Continual unsupervised representation learning. *Advances in neural information processing systems*, *32*, 7647-7657.
[4] Ardywibowo, R., Huo, Z., Wang, Z., Mortazavi, B. J., Huang, S., & Qian, X. (2022). VariGrow: Variational architecture growing for task-agnostic continual learning based on Bayesian novelty. In *International Conference on Machine Learning* (pp. 865-877).
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal
Comment: Thank you to the authors for their response and improvements to clarity. I maintain my current score and recommendation of acceptance. | Summary: The authors propose a new architectural method for continual reinforcement learning. The method relies on training not just the weights of the network, but also the arrangement of the neurons in each layer (implemented as permutation vectors based on a learned score vector for each layer of neurons).
The method continually adjusts previously saved permutations/rewirings, by ensuring that they would produce similar outputs when applied to the new weights, despite constant changes in the actual weights (IIUC).
In its current form, the method requires task IDs, but no replay buffers.
Various experiments suggest that the method is competitive with other established methods with similar requirements, despite its conceptual simplicity.
Strengths: - The method is novel (to my knowledge) and elegant.
- The results seem interesting.
- The exposition is (mostly) clear
Weaknesses: I don't see any major weakness, except for the limitations noted by the authors in the Conclusion. I would appreciate some clarifications, as detailed below.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Eq. 3: While I understand NumPy notation, some readers might not. Please explain it a bit more.
- Eq. 4: This seems to be the most important equation, but it is difficult to understand. What is the d() function? Apparently it is not the dimensionality of the layers, which is also (confusingly) called d in the next paragraph? Please clarify this.
- Figure 3 shows the divergence and convergence between the various policies, but not their performance. It would be useful to have an additional curve indicating (say) median performance.
- Section 3.4: IIUC, the method here is to continuously adjust the previously found wirings P(t-k), so that they will produce similar outputs as they did with their original weights w(t-k), when applied to the new weights w(t), despite the fact that the weights w(t) keep changing (if so, a one-sentence explanation of this type would be helpful).
- It seems that you are learning W(t) and P’/P’’ together, in parallel, through the loss in equation 11. An alternative would be to simply learn the new W(t) in isolation, then afterwards compute the P’/P’’ “offline”, by applying Eq 11 with frozen W(t) (since Eq. 11 doesn’t seem to require any interaction with the environment). Why not do the latter?
- Can you please confirm whether you simply train one new rewiring (or more precisely one new P’/P’’ pair) per task? That is, does the ’t’ index on Pt/Wt correspond to the tasks? Or is there something else involved?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors note some limitations of their approach in the Conclusion. These do not seem deal-breaking to me, since many of these limitations are shared by other approaches requiring task IDs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for providing valuable feedback. Our responses to your questions are below.
[Q1] NumPy notation in Eq. 3
* Thanks for pointing out the NumPy notation in $\boldsymbol{I}[\boldsymbol{z}_l,:]$. It rearranges the rows of the identity matrix $\boldsymbol{I}$ according to the indices $\boldsymbol{z}_l$, resulting in a permutation matrix. For example, consider $\boldsymbol{v}_l=\begin{pmatrix}0&4&2\end{pmatrix}^\top$ and $\boldsymbol{z}_l=\mathrm{argsort}(\boldsymbol{v}_l)=\begin{pmatrix}0&2&1\end{pmatrix}^\top$, then we have:
$$
\begin{aligned}
\boldsymbol{P}_l&=\boldsymbol{I}[\boldsymbol{z}_l,:]\\\\
&=\begin{pmatrix}1&0&0\\\\0&0&1\\\\0&1&0\end{pmatrix}.
\end{aligned}
$$
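As a hedged illustration (our sketch for this response, not code from the paper), the worked example above can be checked in a few lines of NumPy, with `v` and `z` mirroring $\boldsymbol{v}_l$ and $\boldsymbol{z}_l$:

```python
import numpy as np

v = np.array([0, 4, 2])        # score vector v_l from the example
z = np.argsort(v)              # z_l = argsort(v_l) -> [0, 2, 1]
I = np.eye(3, dtype=int)
P = I[z, :]                    # fancy indexing: rearrange the rows of I by z
# P is the permutation matrix shown above, and P @ v sorts v.
```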
[Q2] The $d()$ function in Eq. 4
* $d()$ is an almost-everywhere differentiable semi-metric (such as the $L_1$ or $L_2$ distance) applied pointwise [1]. In this paper we use the $L_1$ distance. Below is a demonstration following the previous example, with $\tau=1$:
$$
\begin{aligned}
\hat{\boldsymbol{P}}_l&=\mathrm{softmax}\left(\frac{-d(\mathrm{sort}(\boldsymbol{v}_l)\boldsymbol{1}^\top,\boldsymbol{1}\boldsymbol{v}_l^\top)}{\tau}\right)\\\\
&=\mathrm{softmax}\left(-d(\begin{pmatrix}0&0&0\\\\2&2&2\\\\4&4&4\end{pmatrix},\begin{pmatrix}0&4&2\\\\0&4&2\\\\0&4&2\end{pmatrix})\right)\\\\
&=\mathrm{softmax}\begin{pmatrix}0&-4&-2\\\\-2&-2&0\\\\-4&0&-2\end{pmatrix}\\\\
&\approx\begin{pmatrix}0.867&0.016&0.117\\\\0.107&0.107&0.787\\\\0.016&0.867&0.117\end{pmatrix}.
\end{aligned}
$$
The result is a continuous relaxation of the permutation matrix $\boldsymbol{P}_l$, which allows end-to-end learning.
* As for the naming confusion, we will rename the dimensionality of the layers to $n$ to avoid any ambiguity.
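As a sanity check of the computation above (a hedged sketch for this response; `softsort_l1` is our hypothetical helper, not the paper's implementation of Eq. 4):

```python
import numpy as np

def softsort_l1(v, tau=1.0):
    """Continuous relaxation of the permutation matrix (SoftSort with L1 distance)."""
    v = np.asarray(v, dtype=float)
    s = np.sort(v)                                   # sort(v), as in the worked example
    logits = -np.abs(s[:, None] - v[None, :]) / tau  # -d(sort(v) 1^T, 1 v^T) / tau
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)          # row-wise softmax

P_hat = softsort_l1([0, 4, 2], tau=1.0)
# Each row of P_hat sums to 1, and the row-wise argmax recovers the hard
# permutation z_l = [0, 2, 1] from the previous example.
```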
[Q3] Performance of multi-mode policies
* Following your suggestion, we've included performance curves in the global response (Figure 2). Figure 2a demonstrates that, despite the slower initial learning of multi-mode policies due to policy divergence, they ultimately perform better upon convergence. Figure 2b shows that incorporating explicit convergence guidance improves both mean and median rewards, verifying the impact of convergence on performance.
[Q4] Interpretation of Section 3.4
* Your understanding is correct. Our method continuously adjusts the cached wirings to counteract weight changes, thereby improving stability. We will add a brief explanation like this in Section 3.4 for clarification.
[Q5] Computing $\boldsymbol{P}'$/$\boldsymbol{P}''$ afterwards
* In this case, the learning objective for $\boldsymbol{W}^t$ (Eq. 11) is computed using outdated $\boldsymbol{P}'$/$\boldsymbol{P}''$ (from the last round). Therefore, its performance is likely to be inferior to joint learning, which is why we stayed with the original implementation.
[Q6] Training details
* We train one new wiring $\boldsymbol{P}^t$ and one new $\boldsymbol{P}'$/$\boldsymbol{P}''$ pair per task, where the index $t$ on $\boldsymbol{P}^t$ represents the current task. The rest of the wirings ($\boldsymbol{P}^{t-k},k>0$) remain frozen and are only updated at the end of each task, so no additional training cost is involved.
Reference:
[1] Prillo, S., & Eisenschlos, J. (2020). SoftSort: A continuous relaxation for the argsort operator. In *International Conference on Machine Learning* (pp. 7793-7802).
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: I appreciate the author's clarifications in response to this and other reviews. I maintain my recommendation for acceptance. | Summary: The paper proposes a new method for continual reinforcement learning. The idea is to leverage a neuron rewiring mechanism implemented as additional permutations of neurons between the NN weights. In the proposed algorithm, several such permutation sets are maintained, which correspond to different policies that can be used. Having those multiple policies helps to maintain exploration. As a regularization mechanism, a variant of L2 is used for combined NN weight matrices and permutation matrices. Experimental evaluation shows the strong performance of the proposed method, which often matches or outperforms baseline approaches while limiting parameter overhead.
Strengths: [S1] Presented solution is quite simple and elegant. Neuron permutations provide both an effective parametrization of the NN and the mechanism for having cheap ensembles.
[S2] Experimental results are compelling, the presented approach is demonstrated to match or beat previous approaches and limit the parameter overhead.
[S3] Ablation experiments and analyses are provided, providing further insights into the method.
Weaknesses: [W1] Although the paper is mostly easy to read, some sections would benefit from editing. Especially Section 3.4 seems wordy and sometimes contains unnecessary or misleading statements, e.g. line 195: “In this situation, one may interpret…” - I have to say I don’t understand this sentence. See also the Questions section of the review for more (mostly minor) suggestions.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: [Q1] Are rewiring mechanisms used for actor only, or both actor and critic? Have you tried different options here?
[Q2] Suggestion for an additional experiment (optional): one can see your approach as a way of maintaining a cheap set of ensemble models, each starting from one base model and modifying it with some small adapting module. It would be interesting to compare performance to similar solutions (not previously used in continual RL AFAIK), e.g. https://arxiv.org/abs/2002.06715
Some additional (minor) suggestions:
[Q3] Figure 5 - it is not super clear what is “Rewire” here.
[Q4] Figure 4 - not very easy to read (e.g. because of 4 different triangle shapes) - maybe colors would be easier to read?
[Q5] - “Validation step” label in some of the figures - a nicer alternative would be something more interpretable like training steps.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: IMO authors sufficiently covered the limitations in a dedicated paragraph in the last section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate your valuable suggestions, and we would like to address your main concerns as follows:
[W1] Misleading statements in Section 3.4
* We apologize for any confusion around Section 3.4. Specifically, in line 195, our aim was to present a new interpretation of catastrophic forgetting: Instead of framing it as a shift in weight space (as in EWC, SI [1], etc.), we view it as the misalignment between updated weights and fixed wiring. This perspective led to our proposal to align the wiring with the weights. We will further revise this section to improve its clarity and conciseness.
[Q1] Rewiring details
* The rewiring mechanism is only used for the actor. We have not rewired the critics because many baselines (such as PackNet and PNN) in Brax [2] and Continual World [3] default to standard critics. Rewiring the twin critics would impose significantly higher computational costs than those baselines.
[Q2] Comparison with ensemble methods
* Following your suggestion, we implement BatchEnsemble [4] in continual RL, which learns rank-one "fast weights" for adapting to each new task. The results on the HalfCheetah/forgetting scenario are as follows:
| | H/F performance | 95% confidence interval | Model size |
| ------------- | --------------- | ------------ | ---------- |
| BatchEnsemble | 0.94 $\pm$ 0.23 | [0.81, 1.08] | 1.1 |
| Ours | 1.31 $\pm$ 0.21 | [1.11, 1.40] | 2.1 |
While BatchEnsemble is very efficient, its performance is limited by the expressiveness of rank-one matrices (the main weights are frozen during learning). Our method, on the other hand, jointly learns the weights and the wirings under an alignment scheme (Eq. 11), thus achieving better results.
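For readers unfamiliar with BatchEnsemble, a minimal sketch of its rank-one "fast weights" (our illustration, not code from either paper): the shared slow weights $W$ stay frozen, and each task learns only two vectors $r, s$ that modulate $W$ elementwise as $W \circ rs^\top$.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # shared slow weights, frozen across tasks
r = rng.standard_normal(4)        # per-task fast weights: just 4 + 3 scalars here
s = rng.standard_normal(3)

W_task = W * np.outer(r, s)       # task-specific effective weights, W ∘ r s^T
x = rng.standard_normal(3)
y = W_task @ x                    # forward pass for this task
```

The per-task overhead is only $O(n+m)$ parameters for an $n\times m$ layer, which is consistent with both BatchEnsemble's small model size (1.1) and, as noted above, the limited expressiveness of its rank-one modulation.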
[Q3] "Rewire" in Figure 5
* "Rewire" represents rewiring (Section 3.2) but without the multi-mode strategy (Section 3.3). We have revised the figure caption to provide a brief explanation. Please refer to Figure 2 in the global response.
[Q4] Readability of markers in Figure 4
* Thank you for the valuable suggestion. We've now added colors to differentiate the various triangle markers, and the updated version is included as Figure 1a in the global response.
[Q5] More interpretable x label in Figures 3 and 5
* We have replaced "Validation step" with "Step of interaction" in the global response. The "training steps" mentioned in the reviewing comment can be easily inferred from this. For example, on the HalfCheetah/forgetting scenario, the actor is updated every two steps of interaction, while the critics are updated every step.
References
[1] Zenke, F., Poole, B., & Ganguli, S. (2017). Continual learning through synaptic intelligence. In *International conference on machine learning* (pp. 3987-3995).
[2] Gaya, J. B., Doan, T., Caccia, L., Soulier, L., Denoyer, L., & Raileanu, R. (2023). Building a subspace of policies for scalable continual learning. In *International Conference on Learning Representations*.
[3] Wołczyk, M., Zając, M., Pascanu, R., Kuciński, Ł., & Miłoś, P. (2021). Continual world: A robotic benchmark for continual reinforcement learning. *Advances in Neural Information Processing Systems*, *34*, 28496-28510.
[4] Wen, Y., Tran, D., & Ba, J. (2020). BatchEnsemble: an alternative approach to efficient ensemble and lifelong learning. In *International Conference on Learning Representations*.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed answers to my review. I choose to keep my original score, as it still reflects my evaluation of the work. | Rebuttal 1:
Rebuttal: We thank all reviewers for the insightful comments, which are important for improving our work. Alongside our individual responses, we have prepared a PDF file containing figures that address several frequently raised concerns. Below is a concise summary of these figures.
* Figure 1: Performance-size tradeoff. It includes additional comparisons with CSP (Reviewer NgVj) from 10 runs (Reviewer CtSV) and improved readability (Reviewer cvgK).
* Figure 2: Evolution of performance for multi-mode policies (Reviewers AnPE and NgVj), featuring 95% confidence interval (Reviewer CtSV) and median (Reviewer AnPE). It also uses more interpretable x labels and a revised caption (Reviewer cvgK).
Pdf: /pdf/8acb457cb465749ca42be2a4fb4c25b8ef7c7941.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work studies the continual reinforcement learning setting. This paper proposes a method to permute the neurons in the network, and these permutations allow the exploration of a large part of the weight space. The proposed method caches weights from the prior tasks, which helps to mitigate forgetting. An additional rewiring strategy is proposed to encourage exploration when the task changes.
Strengths: The proposed method is memory efficient, which is essential for lifelong learning systems as they might encounter an arbitrarily long sequence of tasks. The memory complexity is only O(k*d), where d is the number of neurons in the network and k is the number of tasks.
I like the focus on ensuring continual exploration in new environments. This issue is generally overlooked in the literature, and the emphasis on continual exploration strengthens this work.
The empirical evaluation is performed on a wide range of environments. This evaluation can provide a detailed assessment of the proposed method.
Weaknesses: Although the ideas presented in this paper are interesting, the empirical evaluation could be more rigorous.
All the experiments are performed with just three random seeds (line 259), meaning the results presented in the paper are not statistically significant.
Technical Quality: 1 poor
Clarity: 2 fair
Questions for Authors: The paper only considers a setting where the agent cannot access replay buffers from the previous tasks. But this seems like an arbitrary choice. If the agent has a replay, why can it not store experience from previous tasks? Can you please elaborate on the rationale for not allowing methods to store experience from previous tasks?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 1 poor
Presentation: 2 fair
Contribution: 3 good
Limitations: The empirical evaluation in this work is weak. I recommend a new paper by Patterson et al. (2023) to understand how to perform proper empirical analysis in RL.
I will consider raising my score by 3 points or more if the authors improve the empirical evaluation and show that the current results hold with more runs.
Specifically, I want the authors to perform at least ten runs (30 would be ideal) for all experiments and show the 95% bootstrapped confidence interval.
Patterson, A., Neumann, S., White, M., & White, A. (2023). Empirical Design in Reinforcement Learning. arXiv preprint arXiv:2304.01315.
EDIT
I have updated my score based on the new results shared by the authors
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the very helpful comments. Here are our responses to the concerns raised.
[W1] Evaluation could be more rigorous
* Having read the paper by Patterson et al. [1], we wholeheartedly concur that our work stands to gain from a more rigorous evaluation. To this end, during the rebuttal period we have exhausted all GPU resources that we can access to run the full method on all HalfCheetah and Ant scenarios with 10 seeds. This process has consumed over 3 GPU-months. We assure that the remaining experiments will be performed soon and included in the revised paper. In the interim, we present the current results, accompanied by a 95% bootstrap confidence interval (around the mean), as follows.
| | HalfCheetah performance | 95% confidence interval | Ant performance | 95% confidence interval |
| ---------------- | ----------------------- | ------------ | --------------- | ------------ |
| Forgetting | 1.31 $\pm$ 0.21 | [1.11, 1.40] | 1.46 $\pm$ 0.15 | [1.36, 1.55] |
| Transfer | 1.42 $\pm$ 0.19 | [1.29, 1.52] | 0.76 $\pm$ 0.07 | [0.71, 0.79] |
| Robustness | 1.07 $\pm$ 0.12 | [0.98, 1.13] | 0.73 $\pm$ 0.11 | [0.68, 0.81] |
| Compositionality | 0.88 $\pm$ 0.09 | [0.81, 0.92] | 1.95 $\pm$ 0.11 | [1.87, 2.00] |
| Aggregate | 1.17 $\pm$ 0.15 | [1.04, 1.24] | 1.22 $\pm$ 0.11 | [1.15, 1.29] |
And below is the updated comparison table with some baselines in [2]. It can be seen that our method remains competitive even with more training runs.
| | HalfCheetah performance | Model size | Ant performance | Model size |
| ------- | ----------------------- | ---------- | --------------- | ---------- |
| PackNet | 0.85 $\pm$ 0.14 | 2.0 | 1.08 $\pm$ 0.21 | 2.0 |
| PNN | 1.03 $\pm$ 0.14 | 8.0 | 0.98 $\pm$ 0.31 | 8.0 |
| FT-N | 1.16 $\pm$ 0.20 | 8.0 | 0.97 $\pm$ 0.20 | 8.0 |
| CSP | 1.27 $\pm$ 0.27 | 5.4 | 1.11 $\pm$ 0.17 | 3.9 |
| Ours | 1.17 $\pm$ 0.15 | 2.1 | 1.22 $\pm$ 0.11 | 2.1 |
* In order to enhance the evaluation process, we have made the following efforts: (1) In our rebuttal, the majority of comparative results are accompanied by a 95% bootstrap confidence interval (around the mean) derived from 10 individual runs. (2) The curve plots presented in our work feature both a 95% confidence interval and the median for a comprehensive analysis.
[Q1] Why assume no access to previous replay buffers
* The primary concern is memory usage, as highlighted in recent literature [2,3]. Gaya et al. [2] reported that for HalfCheetah, the replay buffer used by all methods requires around 1GB per task, while for Humanoid it takes around 15GB per task. Storing experience from previous tasks would result in a large memory overhead, which is undesirable in memory-sensitive situations. Therefore, we adopt the setting without access to previous replay buffers, which also allows direct comparisons with the baselines in [2].
References
[1] Patterson, A., Neumann, S., White, M., & White, A. (2023). Empirical design in reinforcement learning. *arXiv preprint arXiv:2304.01315*.
[2] Gaya, J. B., Doan, T., Caccia, L., Soulier, L., Denoyer, L., & Raileanu, R. (2023). Building a subspace of policies for scalable continual learning. In *International Conference on Learning Representations*.
[3] Khetarpal, K., Riemer, M., Rish, I., & Precup, D. (2022). Towards continual reinforcement learning: A review and perspectives. *Journal of Artificial Intelligence Research*, *75*, 1401-1476.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for performing more runs in a short amount of time. I understand how much effort and computation it takes to do so many runs quickly. The new results in Tables 1 and 2 in the rebuttal alleviate my concerns about the statistical significance of the results. These new results with ten seeds align with those reported in the original submission. Based on these new results, I have updated my score to accept the paper. I look forward to the authors adding results with more seeds for the remaining experiments in the revised paper. | null | null | null | null | null | null |
Joint Bayesian Inference of Graphical Structure and Parameters with a Single Generative Flow Network | Accept (poster) | Summary: In a recent line of research, Generative Flow Networks (GFlowNets) are used for structure learning by drawing DAGs (that represent BN structures) from an implicit DAG distribution that fits the data (Deleu et al., 2022). The contribution of the present paper is extending this line of research by jointly drawing the DAG and the parameters (of the factors associated with the DAG's nodes).
To be more precise, they first construct a graph progressively by adding new links between nodes (as in the Deleu et al. 2022 paper), and then the parameters are chosen to form a (structure, parameter) pair.
Strengths: 1. The problem that this paper addresses i.e. "structure learning" is an important task.
2. The paper is overall well written and the literature review is thorough.
3. The reported results are good.
4. This is an interesting paper but unfortunately, I believe its theoretical flaws are serious and have to be addressed before publishing the work.
Weaknesses: 1. The core theory behind this work, i.e. equation (6), seems fundamentally incorrect and has measure theoretic problems:
Detailed balance is sufficient for the "fixed point condition" --which in this case means the convergence of the probability of the terminal structure/parameters (G, theta) to a posterior probability proportional to its reward R(G, theta)-- only if the dimensionality is fixed. When the sub-spaces have different dimensions, measure-theoretic problems arise due to mapping a single point (in a low-dimensional subspace) to infinitely many points (in a high-dimensional subspace).
The authors correctly mention (in line 242) that MCMC encounters trans-dimensional move problems.
However, this (as discussed in Green's Reversible Jump MCMC paper) is due to the detailed balance condition which is the foundation of the present work as well. As such, as far as I can say, the same issue should be addressed here too otherwise the correct convergence is not guaranteed.
2. The code is not provided. As such, the reported results are not reproducible.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I would appreciate it if the authors could comment on the mentioned measure theoretic problem.
2. In lines 176-179, there is a confusing statement that I interpret as follows: "Although there is no guarantee that equation (6) guarantees the convergence but if we assume that (6) holds for all pairs (which I suppose, means if we assume that the dimensionality is fixed which is not (?)) then the convergence is guaranteed."
Could you please clarify these lines?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their comments; we believe that checking the soundness of theoretical results when available is an integral part of the reviewing process, and we appreciate it that you took the time to do so. **However, we want to draw the attention of all reviewers and the Area Chairs to the fact that there is no theoretical flaw in our submission, including in the context raised by Reviewer 9gh3.**
The complete measure-theoretical treatment of continuous GFlowNets can be found in (Lahlou et al., 2023), but we provide here some additional details in the specific context of this submission. For some DAG $G$, let $\Theta_{G}$ denote the measurable space of parameters $\theta$ associated with $G$ (with its Borel $\sigma$-algebra, which we omit here for clarity). **Note that the spaces $\Theta_{G}$ do not need to be the same for different graphs, and they may have different dimensionalities.** From this perspective, the reward $R(G, d\theta)$ and the forward transition probability $P_{\phi}(d\theta\mid G)$ are now **measures** over $\Theta_{G}$ (using infinitesimal notations).
For any transition $G \rightarrow G'$ in the GFlowNet (i.e., $G'$ is the result of adding a single edge to $G$), Eq 6 of the submission represents **an equality between measures over the product space $\Theta_{G}\times \Theta_{G'}$**. Putting it differently, for any bounded measurable function $h: \Theta_{G}\times \Theta_{G'}\rightarrow \mathbb{R}$, the condition reads:
$$
\iint_{\Theta_{G}\times \Theta_{G'}}h(\theta, \theta')R(G', d\theta')P_{B}(G\mid G')P_{\phi}(d\theta\mid G) = \iint_{\Theta_{G}\times \Theta_{G'}}h(\theta, \theta')R(G, d\theta)P_{\phi}(G'\mid G)P_{\phi}(d\theta'\mid G')
$$
We would like to emphasize once again that this condition is valid even when $\Theta_{G}$ and $\Theta_{G'}$ do not have the same dimensionality, since this is an equality between measures over the product space (not either individual space $\Theta$). In (Lahlou et al., 2023), it was shown how similar conditions can be translated into constraints in terms of probability **densities**.
One mild (implicit) assumption we make in this submission is that for any DAG $G$, there exists a (common base) measure $\mu_{G}$ over $\Theta_{G}$ such that $R(G, d\theta) \ll \mu_{G}(d\theta)$ and $P_{\phi}(d\theta\mid G) \ll \mu_{G}(d\theta)$ are both absolutely continuous wrt. $\mu_{G}$. This assumption is indeed mild because as mentioned in Section 3.4 of our submission, $P_{\phi}(d\theta\mid G)$ approximates the posterior distribution $P(d\theta\mid G, \mathcal{D})$ (up to a multiplicative constant), and $R(G, d\theta) = P(\mathcal{D}, G, d\theta)$. Using the notations $R(G, \theta)$ and $P_{\phi}(\theta\mid G)$ to denote their Radon-Nikodym derivatives wrt. $\mu_{G}$, Eq 6 corresponds to the equality between measures above, written as an equality in terms of their Radon-Nikodym derivatives:
$$
R(G', \theta')P_{B}(G\mid G')P_{\phi}(\theta\mid G) = R(G, \theta)P_{\phi}(G'\mid G)P_{\phi}(\theta'\mid G')
$$
All the results in the submission hold if we explicitly work with measures instead of their derivatives, regardless of the dimensionalities of $\Theta$. In particular Thm 3.1 of our submission (satisfying these conditions implies that the GFlowNet induces a distribution $\propto R$) still holds true. We provide a detailed proof from a measure theoretical perspective in a separate comment below for completeness.
Unlike the detailed balance condition in MCMC, Eq 6 (which is *not* the detailed balance condition for Markov chains) does not define a fixed-point condition for an invariant distribution to match some target distribution (here, the joint posterior). The objective of the GFlowNet is to find a forward transition probability $P_{\phi}$ such that its marginal distribution over complete states (called the terminating state probability in line 186) matches the joint posterior, via satisfying Eq 6. $P_{\phi}$ itself is not the invariant distribution that would arise from a fixed-point condition like the detailed balance.
The key difference with RJ-MCMC is that in the GFlowNet presented here, we never have to make trans-dimensional moves: once the first phase ends (Sec 3.1), we commit to a specific graph $G$, and therefore we can sample from a distribution over $\Theta_{G}$; then once this is done (end of the second phase), we can start over from the initial state $(G_{0}, \cdot)$. In other words, we never make any move from one space $\Theta_{G}$ (or more precisely $(\{G\}\times \Theta_{G})$) to another $\Theta_{G'}$ with possibly a different dimensionality, unlike in RJ-MCMC. That’s why there is no need for an invertible transformation across dimensions in our Eq 6.
> *In lines 176-179, there is a confusing statement […] Could you please clarify these lines?*
>
We see where this confusion comes from, and we apologize for it. “The SubTB conditions” in line 176 refers to generic subtrajectory balance conditions, as described in App C.2, and not the ones specifically for our setting in Eq 6. What we meant was that satisfying subtrajectory balance conditions of the form in Eq C.3 does not guarantee (in general) that the GFlowNet induces a distribution proportional to $R$. However, satisfying the SubTB conditions for the specific subtrajectories of the form $(G, \theta) \leftarrow (G, \cdot) \rightarrow (G', \cdot) \rightarrow (G', \theta')$ considered in Sec 3.2, corresponding to the conditions in Eq 6, does guarantee this result (Thm 3.1). We propose to replace lines 176-179 by:
*Although there is no guarantee in general that satisfying **the SubTB conditions in (C.3) for arbitrary subtrajectories** would yield…*
We hope that this clarified the misunderstanding about our theoretical results, and we strongly encourage the reviewer to update their score since this was the only major source of confusion.
---
(Lahlou et al., 2023) Salem Lahlou, et al. A Theory of Continuous Generative Flow Networks. ICML 2023.
---
Rebuttal Comment 1.1:
Title: Proof of Theorem 3.1 from a measure theoretical perspective
Comment: *(In this comment, we give some details on how to prove Theorem 3.1 of our submission from a purely measure theoretical perspective, using the condition above)*
The joint distribution over graphs & parameters is defined over the measurable union space $\mathcal{X} \triangleq \bigcup_{G\in\mathbf{G}}(\{G\}\times \Theta_{G})$, where $\mathbf{G}$ represents the (finite) set of all DAGs over $d$ nodes. We can first prove that if we have transitions $G \rightarrow G' \rightarrow G''$ in the GFlowNet (i.e., $G'$ is the result of adding one edge to $G$, and $G''$ is the result of adding one edge to $G'$), then for any bounded measurable function $h: \Theta_{G}\times \Theta_{G''}\rightarrow \mathbb{R}$, we have:
$$
\iint_{\Theta_{G}\times \Theta_{G''}}h(\theta, \theta'')R(G'', d\theta'')P_{B}(G\mid G')P_{B}(G'\mid G'')P_{\phi}(d\theta\mid G) = \iint_{\Theta_{G}\times \Theta_{G''}}h(\theta, \theta'')R(G, d\theta)P_{\phi}(G''\mid G')P_{\phi}(G'\mid G)P_{\phi}(d\theta''\mid G'')
$$
To show this, we use the SubTB condition above, observing that $h(\theta, \theta'')$ is a bounded measurable function on $\Theta_{G'}\times \Theta_{G''}$ (constant on $\Theta_{G'}$ for a fixed $\theta$), together with the Fubini-Tonelli theorem.
$$\Bigg[\iint_{\Theta_{G}\times \Theta_{G''}} h(\theta, \theta'')R(G'', d\theta'')P_{B}(G\mid G')P_{B}(G'\mid G'')P_{\phi}(d\theta\mid G)\Bigg]\times\Bigg[\int_{\Theta_{G'}}P_{\phi}(d\theta'\mid G')\Bigg]$$
$$= \int_{\Theta_{G}}\Bigg[\iint_{\Theta_{G'}\times\Theta_{G''}}h(\theta, \theta'')R(G'', d\theta'')P_{B}(G'\mid G'')P_{\phi}(d\theta'\mid G')\Bigg]P_{B}(G\mid G')P_{\phi}(d\theta\mid G)$$
$$= \int_{\Theta_{G}}\Bigg[\iint_{\Theta_{G'}\times\Theta_{G''}}h(\theta, \theta'')R(G', d\theta')P_{\phi}(G''\mid G')P_{\phi}(d\theta''\mid G'')\Bigg]P_{B}(G\mid G')P_{\phi}(d\theta\mid G)$$
$$= \iint_{\Theta_{G}\times \Theta_{G'}}\Bigg[\int_{\Theta_{G''}}h(\theta, \theta'')P_{\phi}(G''\mid G')P_{\phi}(d\theta''\mid G'')\Bigg]R(G', d\theta')P_{B}(G\mid G')P_{\phi}(d\theta\mid G)$$
The quantity inside the brackets is a bounded measurable function of $\Theta_{G}\times \Theta_{G'}$. We can therefore apply the SubTB condition above.
$$= \iint_{\Theta_{G}\times\Theta_{G'}}\Bigg[\int_{\Theta_{G''}}h(\theta, \theta'')P_{\phi}(G''\mid G')P_{\phi}(d\theta''\mid G'')\Bigg]R(G, d\theta)P_{\phi}(G'\mid G)P_{\phi}(d\theta'\mid G')$$
$$= \Bigg[\iint_{\Theta_{G}\times\Theta_{G''}}h(\theta, \theta'')R(G, d\theta)P_{\phi}(G'\mid G)P_{\phi}(G''\mid G')P_{\phi}(d\theta''\mid G'')\Bigg]\times\Bigg[\int_{\Theta_{G'}}P_{\phi}(d\theta'\mid G')\Bigg]$$
This proves the equality above on the product space $\Theta_{G}\times\Theta_{G''}$. By induction, we can also prove that for a partial trajectory $G_{0} \rightarrow G_{1}\rightarrow \ldots \rightarrow G_{T}$ in the GFlowNet, we have for any bounded measurable function $h: \Theta_{G_{0}}\times \Theta_{G_{T}}\rightarrow \mathbb{R}$:
$$
\iint_{\Theta_{G_{0}}\times\Theta_{G_{T}}}h(\theta_{0}, \theta_{T})R(G_{T}, d\theta_{T})\prod_{t=0}^{T-1}P_{B}(G_{t}\mid G_{t+1})P_{\phi}(d\theta_{0}\mid G_{0}) = \iint_{\Theta_{G_{0}}\times\Theta_{G_{T}}}h(\theta_{0}, \theta_{T})R(G_{0}, d\theta_{0})\prod_{t=0}^{T-1}P_{\phi}(G_{t+1}\mid G_{t})P_{\phi}(d\theta_{T}\mid G_{T})
$$
Theorem 3.1 of our submission can be rewritten in measure theoretical terms as
> If the SubTB conditions above are satisfied for all undirected paths of length 3 between
any $(G, \theta)$ and $(G', \theta')$ of the form $(G, \theta) \leftarrow (G, \cdot) \rightarrow (G', \cdot) \rightarrow (G', \theta')$, then we have for any bounded measurable function $h: \Theta_{G}\rightarrow \mathbb{R}$
> $\displaystyle \int_{\Theta_{G}}h(\theta)P_{\phi}^{\top}(G, d\theta) \triangleq \int_{\Theta_{G}}h(\theta) \sum_{G_{0}\rightsquigarrow G}\prod_{t=0}^{T-1}P_{\phi}(G_{t+1}\mid G_{t})P_{\phi}(d\theta\mid G)\propto \int_{\Theta_{G}}h(\theta)R(G, d\theta)$
We prove this using the lemma above: since any bounded measurable function $h:\Theta_{G}\rightarrow \mathbb{R}$ is also a bounded measurable function over $\Theta_{G_{0}}\times \Theta_{G}$ (constant wrt. $\theta_{0}$), we can apply the lemma directly (writing $G = G_{T}$)
$$\Bigg[\int_{\Theta_{G_{0}}}R(G_{0}, d\theta_{0})\Bigg]\times \Bigg[\int_{\Theta_{G}}h(\theta)P_{\phi}^{\top}(G,d\theta)\Bigg]$$
$$= \sum_{G_{0}\rightsquigarrow G}\iint_{\Theta_{G_{0}}\times\Theta_{G}}h(\theta)R(G_{0}, d\theta_{0})\prod_{t=0}^{T-1}P_{\phi}(G_{t+1}\mid G_{t})P_{\phi}(d\theta\mid G)$$
$$= \sum_{G_{0}\rightsquigarrow G}\iint_{\Theta_{G_{0}}\times\Theta_{G}}h(\theta)R(G, d\theta)\prod_{t=0}^{T-1}P_{B}(G_{t}\mid G_{t+1})P_{\phi}(d\theta_{0}\mid G_{0})$$
$$= \Bigg[\int_{\Theta_{G_{0}}}P_{\phi}(d\theta_{0}\mid G_{0})\Bigg]\times \Bigg[\int_{\Theta_{G}}h(\theta)R(G, d\theta)\sum_{G_{0}\rightsquigarrow G}\prod_{t=0}^{T-1}P_{B}(G_{t}\mid G_{t+1})\Bigg]$$
$$= \int_{\Theta_{G}}h(\theta)R(G, d\theta)$$
---
Rebuttal Comment 1.2:
Title: Reviewer 9gh3: Reply to Potential Theoretical Flaw
Comment: Dear Reviewer 9gh3,
The authors have replied to your main concern regarding a potential theoretical flaw (with measure-theoretic and trans-dimensional issues). Have their reply and their additional comments addressed this concern? Do these change your assessment of the paper? | Summary: The authors present a method for learning a posterior distribution over Bayesian networks using an extended version of GFlowNets that can jointly construct the underlying DAG and associated parameters in a two-stage process. They use an objective based on a generalization of detailed balance to train their flow network and provide a theoretical justification that optimizing this generalized objective leads to the desired distribution.
The method is evaluated on synthetic problems and gene regulatory networks, outperforming recent methods based on MCMC.
Strengths: The problem of learning Bayesian networks is relevant to the community. The authors provide a deep theoretical framework for the presented problem and derive it thoroughly. The strength of the method is that it provides not just a point estimate, but a full posterior distribution over Bayesian networks and parameters in an amortized fashion.
Weaknesses: - While the theoretical framework is explained in much detail, the actual implementation is quite opaque to me. The authors could spend more time in explaining the architecture and implementation details.
- “You are strongly encouraged to submit code that produces this result.” [NeurIPS guidelines].
- A simple educational example that is dissected in detail could help to better understand the framework.
- The experimental results are restricted to some performance metrics, but especially in the biological datasets, it would be interesting to dig deeper. Could JSP-GFN discover known gene regulatory networks? How do posterior predictives look compared to the real data? What role does interventional vs. observational data play?
- The experiments provide little intuition about the regime on which this method can be expected to work in terms of the number of training data, the number of nodes, and the number of parameters per node.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Fig. D.3c: is this the negative LL (lower=better) or just LL? Depending on the answer the interpretation of the results would change.
- What is the relationship between accuracy vs. number of training data?
- How does the method scale with the number of nodes? Can the authors provide some intuition about the size of Bayesian networks one might be able to learn with this method, also in relation to the number of training data?
- Why is the additional use of a graph neural network necessary compared to the architecture used in DAG-GFlowNet?
- The authors discuss how $P_{\phi}(\theta_i|G,stop)$ can be parameterized by a Normal/diagonal-Normal distribution, or a 2-layer MLP for each $\theta_i$. How do the authors ensure that correlations between $\theta$s of different $G_i$ can be learned if different MLPs are used for each $\theta_i$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors only touch on some limitations and a more detailed discussion would strengthen the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their detailed comments. Due to limited space, we focus our responses here on the Questions, and defer the responses to the other points raised in the review to a separate comment below.
> *Fig. D.3c: is this the negative LL (lower=better) or just LL? Depending on the answer the interpretation of the results would change.*
This is indeed a typo, and we would like to thank you for pointing this out. The results reported in Figure D.3c are the log-likelihoods. We will update the values in the table to report the negative log-likelihoods for consistency with the rest of the paper.
> *What is the relationship between accuracy vs. number of training data?*
In terms of accuracy of the posterior approximation, we expect JSP-GFN (or any other Bayesian structure learning method) to provide a more accurate approximation of the posterior when the dataset is small. When the dataset is larger, there are some challenges that arise in JSP-GFN, which we detail below.
> *How does the method scale with the number of nodes? Can the authors provide some intuition about the size of Bayesian networks one might be able to learn with this method, also in relation to the number of training data?*
One limitation for scaling in terms of the number of nodes is the size of the neural network used to parametrize the forward transition probability: larger graphs mean larger inputs, and are therefore more expensive from a computational perspective. That being said, using a graph network instead of a transformer as in DAG-GFlowNet offers an advantage here (see below).
Another limitation when it comes to scaling, which is beyond the scope of this paper, is the problem of effective exploration when the state space becomes very large (recall that its size is super-exponential in $d$). For the experiments we ran in this paper (up to $d=61$ nodes), we found that the naive exploration strategy described in the general comment (**Practical implementation**) was sufficient. However, we envision that this could become a bottleneck for much larger graphs, where the state space becomes too large. Since this challenge arises from GFlowNets themselves, it is better studied in isolation rather than in the narrow context of Bayesian structure learning (e.g., inspiration can be taken from the RL literature).
From our experiments, we can confidently attest that JSP-GFN is an effective method to approximate the posterior distribution for up to $d=50$ nodes (typically the scale attained by prior works), and perhaps up to $d=100$ nodes.
In terms of the size of the dataset, JSP-GFN shares some limitations that were discussed in DAG-GFlowNet: when the dataset becomes large, the posterior distribution becomes more concentrated around a few graphs (and parameters). Therefore, the neural network that parametrizes the forward transition probability must be able to handle large variations in outputs, which can be challenging. This comes from the fact that Equation 8 corresponds (roughly speaking) to a regression problem where the target is the delta score $\log R(G', \theta') - \log R(G, \theta)$: if the posterior is very “peaky”, the delta score may take values of large magnitude (positive or negative; it scales roughly with the size of $\mathcal{D}$). One standard practice in machine learning would be to normalize the target of the regression problem, but this is impossible here since it would change the nature of the distribution we are trying to approximate. One solution suggested by the authors of DAG-GFlowNet, which could be applied to JSP-GFN too, would be to take inspiration from simulated annealing and introduce a temperature parameter in the posterior distribution that decreases over training.
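As a minimal sketch of that tempering idea (our own illustration, not code from either paper): introducing a temperature $T > 1$ turns the target into $R^{1/T}$, which divides every log-delta score by $T$ and flattens a peaky posterior; annealing $T \to 1$ recovers the original target.

```python
import numpy as np

# Hypothetical log-rewards log R(G, theta) for three states; the gaps
# between them scale with the dataset size, making the posterior peaky.
log_R = np.array([0.0, -50.0, -120.0])

def tempered(log_R, T):
    # p_T proportional to R^(1/T): divide log-rewards by T, then normalize.
    z = log_R / T
    z = z - z.max()              # subtract the max for numerical stability
    p = np.exp(z)
    return p / p.sum()

p_cold = tempered(log_R, T=1.0)   # essentially a point mass on the best state
p_hot = tempered(log_R, T=50.0)   # much flatter: smaller regression targets
```

Annealing $T$ from a large value down to 1 over training would shrink the magnitude of the delta scores early on, while still targeting the true posterior at the end.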
In practice, JSP-GFN can handle datasets of up to a few thousand examples (in our experiments, we had up to $N=4200$ in the flow cytometry data experiment).
> *Why is the additional use of a graph neural network necessary compared to the architecture used in DAG-GFlowNet?*
The graph network is not necessary, and we could have used a transformer architecture similar to that of DAG-GFlowNet. Our choice was motivated by the fact that the graph network is a much simpler architecture, specifically designed for graph inputs, without compromising performance. The transformer used in DAG-GFlowNet is bound to a fixed input size of $d^2$, regardless of the number of edges in the input graph; in contrast, the input graphs to the graph network have at most $d(d-1)/2$ edges, and often far fewer in practice. This allows the graph network to scale better with the size of the graphs than a transformer.
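A quick back-of-envelope comparison (our own illustration, with an assumed sparse graph) of the input sizes involved:

```python
# Compare the fixed d^2 input of a dense/transformer encoding with the
# edge count a graph network would process for a sparse DAG.
def input_sizes(d, num_edges):
    dense = d * d                       # one entry/token per ordered node pair
    max_dag_edges = d * (d - 1) // 2    # upper bound on edges in a DAG
    assert num_edges <= max_dag_edges
    return dense, num_edges

dense, sparse = input_sizes(d=50, num_edges=100)   # assume ~2 edges per node
print(dense, sparse)   # prints: 2500 100
```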
> *The authors discuss how $P_{\phi}(\theta_{i} \mid G, stop)$ can be parameterized by a Normal/diagonal-Normal distribution, or a 2-layer MLP for each $\theta_{i}$. How do the authors ensure that correlations between $\theta$s of different $G_{i}$ can be learned if different MLPs are used for each $\theta_{i}$?*
We (implicitly) assume in this paper that the priors over graphs $P(G)$ and over parameters $P(\theta\mid G)$ are modular, which is a standard assumption in the structure learning literature, meaning that
$$P(G) = \prod_{i=1}^{d}P(G_{i}) \qquad \mathrm{and} \qquad P(\theta\mid G) = \prod_{i=1}^{d}P(\theta_{i}\mid G_{i})$$
In practice, we used a uniform prior over graphs, and a unit Gaussian prior over parameters (e.g., lines 783-787). Under the modularity assumption, it is easy to see that the posterior over parameters also decomposes as $P(\theta\mid G, \mathcal{D}) = \prod_{i}P(\theta_{i}\mid G_{i}, \mathcal{D})$. This is in fact at the core of our proof in Appendix D.6.1. Since $P_{\phi}(\theta_{i}\mid G, stop)$ effectively approximates the posterior over parameters (as mentioned in lines 231-232), there is no need to model correlations between the $\theta_{i}$'s, since they do not exist in the true posterior.
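For completeness, the factorization of the posterior follows in one line, assuming (as is standard for Bayesian networks) that the likelihood also decomposes over nodes; here $\mathcal{D}^{(i)}$ is our notation for the data relevant to node $i$ and its parents:

$$
P(\theta\mid G, \mathcal{D}) \;\propto\; P(\mathcal{D}\mid \theta, G)\,P(\theta\mid G) \;=\; \prod_{i=1}^{d}\Big[P(\mathcal{D}^{(i)}\mid \theta_{i}, G_{i})\,P(\theta_{i}\mid G_{i})\Big] \;\propto\; \prod_{i=1}^{d}P(\theta_{i}\mid G_{i}, \mathcal{D})
$$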
---
Rebuttal Comment 1.1:
Title: Additional responses
Comment: *(This comment contains additional responses to the review, due to the limited space)*
> *While the theoretical framework is explained in much detail, the actual implementation is quite opaque to me. The authors could spend more time in explaining the architecture and implementation details.*
>
We invite you to read the general comment (**Practical implementation**). Regarding the neural network architecture, we could add additional details in the appendix, but we believe they would not contribute much to the overall description in the main text.
> *A simple educational example that is dissected in detail could help to better understand the framework.*
>
We were hoping that the illustration in Figure 1 (along with the caption, and the main text in Section 3.1) would be sufficient for understanding how sampling from the GFlowNet works; we plan to add pseudo-code to describe the training process of JSP-GFN (see also the general comment, **Practical implementation**).
A more detailed example would require either (1) specifying the whole state space, which could be too cluttered, or (2) specifying concrete values of the forward transition probability $P_{\phi}$ to illustrate the process, both of which would be challenging to present clearly. However, we are open to any suggestion to make this part as clear as possible.
> *“You are strongly encouraged to submit code that produces this result.” [NeurIPS guidelines].*
>
We invite you to read the general comment (**Code**).
> *The experimental results are restricted to some performance metrics, but especially in the biological datasets, it would be interesting to dig deeper. Could JSP-GFN discover known gene regulatory networks? How do posterior predictives look compared to the real data? What role does interventional vs. observational data play?*
>
We completely agree with you, and an analysis of the networks found by JSP-GFN against known regulation pathways would be extremely interesting. We are using the negative log-likelihood on held-out interventions to follow the evaluation of (Sethuraman et al., 2023), which introduced the dataset we are using here (specifically, the subset of genes). We would be cautious with interpreting the gene regulatory networks found by our method, since it assumes that the graphs are acyclic, thus ignoring potential feedback loops existing in real biological systems; see also Appendix B.2 for a discussion of these limitations for biological data.
> *The authors only touch on some limitations and a more detailed discussion would strengthen the paper.*
>
We can suggest adding the limitations highlighted above in terms of challenges for scaling the size of the graphs and size of datasets, to complement the limitations already mentioned in Appendix B.2.
---
(Sethuraman et al., 2023) Muralikrishnna G. Sethuraman, Romain Lopez, Rahul Mohan, Faramarz Fekri, Tommaso Biancalani, Jan-Christian Hütter. NODAGS-Flow: Nonlinear Cyclic Causal Structure Learning. AISTATS 2023. | Summary: The authors utilize a generative flow network to jointly model the structure of the DAG and its parameters.
Strengths: The authors utilize GFlowNets to jointly model the structure of the DAG and its parameters. The experiments on simulated data and real-world data are promising.
Weaknesses: I am not an expert on this area and I did not bid this paper. So I will listen to other reviewers' opinion on weakness.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors did not discuss this which I think they should.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for taking the time to read our submission, despite it not being in their area of expertise.
The limitations of our work, along with broader impacts, are discussed in Appendix B. | Summary: This paper presents a new method for Bayesian structure learning from observational data, based on the framework of GFlowNets.
GFlowNets have been used for Bayesian structure learning before, but previous approaches view the distributions over the parameters and structure of a model as modular and may learn them with separate methods. The paper introduces a joint method that learns them with a single GFlowNet. To do so, the paper extends the previous approach (Deleu et al., 2022) by introducing a new reward function, deriving the subtrajectory balance conditions for this reward function, and introducing a few adaptations to the parametrization of the forward transition probabilities of the GFlowNet.
The paper conducts experiments on small synthetic graphs to evaluate how well the model's posterior over graphs and parameters matches the ground truth. Moreover, experiments with data generated by linear/non-linear Gaussian Bayesian networks are also conducted, with a focus on evaluating different models' performance in predicting held-out data (likelihood).
Strengths: 1. Structure learning is an important area in machine learning. GFlowNets are a new and emerging framework for structure learning. There are only a few studies in this direction, which adds to the novelty of the paper.
2. The paper studies the problem of how to jointly learn parameters and structures in Bayesian structure learning with a single GFlowNet. The proposed solution to the problem is conceptually intuitive and technically sound, including the introduction of subtrajectory balance conditions and the parametrization of the forward transition probabilities.
3. The experiments of the paper are quite comprehensive. Benchmark datasets and recent advances are included. Multiple metrics are compared.
4. The quality of writing of the paper is good. The detailed derivations and settings are given. The paper is easy to follow.
Weaknesses: 1. Jointly learning parameters and structures in Bayesian structure learning with a single GFlowNet seems to be an interesting idea. But the benefit of doing so is less convincing in the paper. We have not seen a theoretical or analytical study on why jointly learning them with one GFlowNet is useful. Empirically, from the experiments of the paper, it is a bit hard to see that the joint learning scheme helps the quality of both the parameters and the structure. Specifically, the proposed method shows advantages on small graphs in Section 5.1, while its performance gain over others in Section 5.2 is a bit marginal in terms of the likelihood of held-out data. More importantly, one of the main purposes of structure learning is to reveal the true graph of the data, where the proposed method has not shown an advantage over VBG (the method that learns graphs with a GFlowNet and parameters with variational inference) in Figure D.2. Therefore, it is a bit unclear whether people should learn them jointly.
2. The experiments in Figure 3 are about predicting held-out data. In the linear Gaussian case, BCD Nets shows very competitive performance. Given that BCD Nets and VBG are linear models, one can still use them to model data generated from non-linear Gaussian models and predict the held-out data, although it might be hard to use them to compare SHD or AUROC. These two methods may serve as informative baselines for the non-linear case (Figure 3.b).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Is it possible to add the results of BCD Nets and VBG in the non-linear case of Figure 3.b?
2. Is it possible to provide complexity analysis between the proposed method and VBG?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their detailed review and comments on our submission.
> *But the benefit of doing so is less convincing in the paper. We have not seen theoretical or analytical study on why jointly learning them with one GFlowNet is useful.*
We invite you to read the general comment (**Advantage of joint posterior**).
> *Empirically from the experiments of the paper, it is a bit hard to see that the joint learning scheme can help the quality of both the parameters and the structure.*
Just to be clear, all methods considered in our paper (both the baseline methods and our own method JSP-GFN) jointly learn the parameters and structure. Our objective is not to show that finding the joint posterior over structures and parameters $P(G, \theta\mid D)$ is advantageous compared to the marginal posterior $P(G\mid D)$. Our comparison with the exact posterior in Sec 5.1 on small graphs shows a clear advantage of JSP-GFN over existing methods, and this is confirmed by the downstream performance on larger graphs in Sec 5.2; the difference is statistically significant against most methods on linear Gaussian models. On non-linear Gaussian models, the downstream performance alone (Fig 3b) is not enough to conclude, although JSP-GFN seems to offer some benefit. That's why we complemented this analysis with a comparison of the log-joint probability in Fig 3c, to show the quality of the approximation. Note that this analysis with correlation plots in Fig 3c, which is a reliable metric for Bayesian structure learning as it compares with the exact posterior, cannot be done for other methods, since they only provide samples of the posterior approximation and the log-probability $\log Q(G, \theta)$ cannot be evaluated.
> *More importantly, one of the main purpose of structure learning is to reveal the true graph of the data, where the proposed method has not shown advantage over VBG […] in Figure D.2. Therefore, it is a bit unclear whether people should learn them jointly.*
We agree that the objective of structure learning is to recover the true graph generating the data, under identifiability assumptions. However, we respectfully disagree that this is the objective of **Bayesian** structure learning: the objective here is to provide a faithful approximation of the posterior distribution over graphs, not to find a single graph that generated the data. We argue in the main text (lines 321-322) that metrics such as SHD and AUROC (which are standard metrics in the structure learning literature) are not representative of the quality of the posterior approximation, and we expand on this in App D.4.3 (lines 886-893).
One argument we give in the paper is that a method that would always sample the true graph would fare very well under these two metrics (with an expected SHD of 0 and an AUROC of 1), even though this would most likely be a very poor approximation of the posterior distribution $P(G\mid D)$. This would be a very good structure learning method, just not a good Bayesian structure learning one. That’s why we focused our analysis on metrics that do not compare with the ground truth graph (Figs 3 & D.3). As mentioned in our paper (lines 322-324), we included the expected SHD and AUROC in the Appendix only for completeness, since prior works on Bayesian structure learning still use these metrics.
For the comparison with VBG specifically, VBG performs significantly worse than JSP-GFN in terms of negative log-likelihood on held-out data in Fig 3a. But beyond performance, JSP-GFN offers three advantages over VBG (some of which are summarized in Table A.1):
- In its original formulation, VBG can only be applied to linear-Gaussian models. The authors mentioned that it is possible to apply VBG to non-linear CPDs, but this has not been theoretically or empirically validated yet (we mention this in our comparison with other methods in App A lines 503-504). On the other hand, JSP-GFN can naturally handle non-linear CPDs.
- JSP-GFN is conceptually simpler than VBG: training simply consists of minimizing the objective in Eq 8 using stochastic gradient methods. In contrast, training VBG consists of an inner-loop/outer-loop procedure, where the GFlowNet needs to be re-trained on a new reward function during the inner loop, and the reward function is updated during the outer loop. This introduces a number of hyperparameters, which might be hard to tune (e.g., the number of training iterations for the GFlowNet during the inner loop).
- It is impossible to use mini-batch training of the GFlowNet in VBG, due to the same limitations DAG-GFlowNet suffers from. This can limit the scope of application of VBG to cases where the size of the dataset is small.
> *Is it possible to add the results of BCD Nets and VBG in the non-linear case of Figure 3.b?*
Thank you for the suggestion. We only used methods that could be applied to non-linear models, to match the data generation process, following (Lorch et al., 2021).
However, we followed your suggestion and evaluated both BCD Nets and VBG (methods based on linear Gaussian models) on our non-linear datasets as well. The results in terms of negative log-likelihood on held-out data are as follows (we report the median and 25th/75th percentiles):
| Algorithm | NLL (held-out) |
| --- | --- |
| M-MC3 | 2911.99 [1920.47, 3000.35] |
| G-MC3 | 4224.05 [2982.29, 5385.90] |
| DiBS | 3181.80 [2376.39, 4018.54] |
| BCD Nets* | 2714.62 [1832.27, 2998.93] |
| VBG* | 3030.28 [2311.04, 4805.90] |
| JSP-GFN (ours) | 3235.74 [2327.23, 3429.66] |
The rows with * denote the new results, and all other rows correspond to the results in Fig 3b. It is interesting to see that methods with linear Gaussian models perform well even on these datasets generated with non-linear functions, although the advantage is not statistically significant.
---
(Lorch et al., 2021) Lars Lorch, et al. DiBS: Differentiable Bayesian Structure Learning. NeurIPS 2021.
---
Rebuttal Comment 1.1:
Title: Additional response regarding complexity analysis
Comment: *(This comment contains an addendum to our rebuttal, due to the limited space)*
> *Is it possible to provide complexity analysis between the proposed method and VBG?*
A complexity analysis might be difficult to derive, since both of these methods involve training neural networks. However, as described above, JSP-GFN is conceptually simpler than VBG as it only involves training a neural network with stochastic gradient methods, as opposed to VBG, which requires a more complex inner-loop/outer-loop procedure, each inner loop itself consisting of training a neural network.
Rebuttal: We would like to thank all the reviewers for taking the time to review our submission and for their valuable comments. While we are addressing specific questions in individual rebuttal comments, we would like to add a few points here which will be of interest for all reviewers and Area Chairs.
If any comment in the reviews has not been addressed in our individual rebuttal responses, this is due to the limited space available. We will provide these responses as soon as the discussion period starts.
### **Theory**
We would like to draw the attention of all reviewers and the Area Chairs here that **there is no theoretical flaw in our submission, including in the context raised by Reviewer 9gh3.**
### **Advantage of joint posterior**
We mentioned in the introduction of our submission (lines 37-46) why it is advantageous to consider the joint posterior $P(G, \theta\mid \mathcal{D})$ as opposed to the marginal posterior $P(G\mid \mathcal{D})$. However, as pointed out by reviewers rtyV & 5gCu, this could have been better highlighted in the paper.
To paraphrase what we said in lines 37-46, the main advantage of looking at $P(G, \theta\mid \mathcal{D})$ is that this allows us to consider Bayesian networks with more complex CPDs (e.g., non-linear CPDs, parametrized by a neural network). This has a significant impact in practice, since marginal posteriors are generally limited to linear Gaussian models (some exceptions exist, as mentioned in the paper in lines 32-36), and therefore approximating the joint posterior significantly increases the range of applications of Bayesian structure learning methods.
For the specific advantages of JSP-GFN over existing methods, we provide a comparison in Table A.1, where we compared our method to other methods based on variational inference and GFlowNets, on 7 different points: (1) if the method was approximating the joint posterior $P(G, \theta\mid \mathcal{D})$, (2) if the method could be applied to Bayesian networks with non-linear CPDs, (3) if the support of the distribution was guaranteed to be limited to the space of DAGs, (4) if the method could be applied to discrete data, (5) if the maximum number of parents for each node in the graph could be explicitly specified (a common assumption in non-Bayesian structure learning), (6) if the method provides a way to sample from $P(G, \theta\mid \mathcal{D})$, and (7) if the method could learn the variational approximation based on mini-batches of data.
To the best of our knowledge, JSP-GFN is the only variational method that can approximate the joint distribution $P(G, \theta\mid \mathcal{D})$ for Bayesian networks with non-linear CPDs (including discrete random variables), while guaranteeing that the support of the distribution is limited to DAGs, which is essential for Bayesian networks. As such, it constitutes a major development in Bayesian structure learning.
### **Practical implementation**
While we described extensively all the necessary components of our method JSP-GFN in our submission, we would like to thank reviewers MmTz & jHMM for pointing out that the description of the practical implementation could be improved.
The way the policy (more precisely, the forward transition probability $P_{\phi}$) is parametrized is precisely described in Section 3.4; only the exact architectures of the neural networks used are not detailed (number of layers, number of hidden units), but we believe that they do not contribute to the overall description. The learning objective is described in Section 3.3, and explicitly formulated in Equation 8. Moreover, the behavior policy $\pi$ (used in the objective in Equation 8) is described in detail in Appendix D.1 (referenced in line 193 in the main text).
Training the GFlowNet then simply consists of standard stochastic optimization:
1. Draw samples $(G, \theta, G', \theta') \sim \pi$ according to the behavior policy $\pi$ as described in Appendix D.1, to obtain a Monte Carlo estimate of Equation 8;
2. Compute the different forward transition probabilities $P_{\phi}(G'\mid G)$, $P_{\phi}(\theta\mid G)$ and $P_{\phi}(\theta'\mid G')$ with the parametrization detailed in Section 3.4;
3. Evaluate the objective in Equation 8 (Section 3.3), and optimize it using stochastic gradient methods.
We can add this as a pseudo-code in the Appendix of the paper (the space in the main text unfortunately does not allow us to put it there). For example in Appendix D, in a new section before Appendix D.3.
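For concreteness, the three steps above can be sketched in a few lines. Everything in this sketch is a made-up stand-in for the components described in Sections 3.3-3.4: the one-logit-per-edge "policy", the uniform behavior policy, and the toy log-reward are all illustrative assumptions, not the actual implementation.

```python
import math
import random

random.seed(0)

# Toy setting: a "graph" is a frozenset of directed edges over 3 nodes.
NODES = [0, 1, 2]
ALL_EDGES = [(i, j) for i in NODES for j in NODES if i != j]

# Hypothetical stand-in for the neural network P_phi of Section 3.4:
# a single logit per edge.
logits = {e: random.gauss(0.0, 0.1) for e in ALL_EDGES}

def forward_prob(G, edge):
    """P_phi(G' | G): softmax over the edges not yet in G."""
    valid = [e for e in ALL_EDGES if e not in G]
    z = sum(math.exp(logits[e]) for e in valid)
    return math.exp(logits[edge]) / z

def log_reward(G):
    """Toy stand-in for the log-reward; a real run would use the joint
    likelihood of (G, theta) as in Equation 8."""
    return -0.5 * len(G)

# Step 1: draw a transition G -> G' from a behavior policy pi (uniform here).
G = frozenset()
edge = random.choice([e for e in ALL_EDGES if e not in G])
G_next = G | {edge}

# Steps 2-3: compute the forward transition probability and evaluate a
# squared log-ratio residual, loosely in the spirit of a (sub)trajectory
# balance objective.
residual = math.log(forward_prob(G, edge)) + log_reward(G) - log_reward(G_next)
loss = residual ** 2
print(loss >= 0.0)
```

A real implementation would replace these toy components with the neural network parametrization of Section 3.4 and take stochastic gradient steps on the Monte Carlo estimate of Equation 8.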
The last component not described in our paper is how the transitions $G\rightarrow G'$ are obtained before being placed in the replay buffer, as described in Appendix D.1 (for the policy $\pi$). This uses a simple $\epsilon$-sampling exploration strategy: we transition in the GFlowNet following $P_{\phi}(G'\mid G)$ with probability $\epsilon$, and select a new edge to add (among the valid edges) uniformly at random with probability $1-\epsilon$. We will add this as part of the pseudo-code above.
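A minimal sketch of this $\epsilon$-sampling step, mirroring the convention stated above; the edges and policy probabilities below are hypothetical:

```python
import random

random.seed(0)

def epsilon_sample(valid_edges, policy_probs, epsilon):
    """Choose the next edge to add: follow the learned policy with
    probability epsilon, else pick uniformly among the valid edges
    (mirroring the convention stated in the rebuttal)."""
    if random.random() < epsilon:
        # Sample proportionally to P_phi(G' | G).
        return random.choices(valid_edges, weights=policy_probs, k=1)[0]
    # Uniform exploration over the valid edges.
    return random.choice(valid_edges)

edges = [(0, 1), (1, 2), (0, 2)]
probs = [0.7, 0.2, 0.1]  # hypothetical policy probabilities
picks = [epsilon_sample(edges, probs, epsilon=0.5) for _ in range(1000)]
print(all(p in edges for p in picks))
```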
### **Code release**
While the source-code was not included as part of the supplementary material, as mentioned by reviewers jHMM & 9gh3, we have since released the code publicly. A link to an anonymized version of the code is available [here](https://anonymous.4open.science/r/jsp-gfn-AD5D/), and a link to the repository will be added to the final version of the paper upon acceptance. Our code is largely based on the code released by (Deleu et al., 2022) available [here](https://github.com/tristandeleu/jax-dag-gflownet).
---
(Deleu et al., 2022) Tristan Deleu, António Góis, Chris Emezue, Mansi Rankawat, Simon Lacoste-Julien, Stefan Bauer, Yoshua Bengio. Bayesian Structure Learning with Generative Flow Networks. UAI 2022. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The present paper extends upon prior literature in making use of GFlowNets for structure learning.
The extension is concerned with finding an adequate approach to learning a complete Bayesian network, that is, the graph's structure and its parameters. Theory is provided to support the consistency of the representation, along with empirical results comparing to the state of the art.
Strengths: _Note to the authors: I apologize in advance for being unable to provide extensive, constructive feedback for your manuscript due to capacity issues on my end. Therefore, the confidence is being kept low. Also, there is an OpenReview bug regarding "First Time Reviewer," which states 'Yes' although it is 'No.'_
* GFlowNets are an interesting alternative approach to sampling for various tasks of interest
* Learning a complete Graphical Structure is the optimum for structure learning
* Wonderful presentation, both theory and visualizations/schematics
* Self-enclosed work showcasing most/all relevant aspects to answering this research question
Weaknesses: * 'Just' learning the parameters brings a completion to the structure learning approach for GFlowNets but IMHO poses a more marginal/incremental improvement
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: * What are the key assumptions (or 'sacrifices') one has to take for the real world example?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: ---
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their review and their enthusiasm regarding our submission.
> *'Just' learning the parameters brings a completion to the structure learning approach for GFlowNets but IMHO poses a more marginal/incremental improvement*
>
We invite you to read the general comment (**Advantage of joint posterior**).
> *What are the key assumptions (or 'sacrifices') one has to take for the real world example?*
>
One major consideration when working with real data is to make sure that our modeling assumptions (how we parametrize the CPDs) match the statistics of the data as closely as possible. In particular, gene expression data initially consists of counts that can take a wide range of values, and a standard preprocessing technique consists of transforming this data through a $\log(1+x)$ function so that the data "looks Gaussian".
However, since the raw data corresponds to counts, and a majority of those entries are 0 due to a lack of detection of a particular gene, this is also reflected after the transformation through $\log (1+x)$. Therefore, the processed data is not Gaussian, but closer to a mixture of a Gaussian and a Dirac at 0 (for these undetected genes): this is well modeled by a zero-inflated Normal distribution, as described in Equation D.29. Modeling this with only a Gaussian CPD (as is standard in the structure learning literature) would create a mismatch with the nature of the data.
But even disregarding the heavy bias towards the value 0, since counts are positive, the function $\log(1+x)$ transforms the non-zero counts into "Gaussian-like" data with a positive mean. However, when working with linear-Gaussian CPDs, it is often standard to work with normalized data, with mean 0, to avoid having to introduce a bias term in the linear function. Here, we cannot easily standardize the data due to the Dirac at 0. Therefore, it can be advantageous to consider a more expressive function for the CPD, such as an MLP, which even offers the option of capturing complex biological mechanisms. JSP-GFN has the advantage of being able to work with such non-linear models (parametrized by an MLP) with non-standard distributions (zero-inflated Normal).
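To make the zero-inflated model concrete, here is a minimal sketch. The mixture form below (a point mass at 0 with probability $\pi_0$, else a Normal) and all numbers are illustrative assumptions; the paper's exact parametrization is its Equation D.29.

```python
import math

def zin_loglik(x, pi0, mu, sigma):
    """Log-likelihood under a zero-inflated Normal: a point mass at 0
    with probability pi0, otherwise N(mu, sigma^2). Hypothetical form,
    for illustration only."""
    if x == 0.0:
        return math.log(pi0)
    dens = math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    return math.log((1.0 - pi0) * dens)

raw_counts = [0, 0, 3, 10, 0, 55]           # made-up gene expression counts
data = [math.log1p(c) for c in raw_counts]  # the log(1 + x) preprocessing
ll = sum(zin_loglik(x, pi0=0.5, mu=2.0, sigma=1.0) for x in data)
print(ll < 0.0)
```

Note how the zero counts survive the $\log(1+x)$ transform as exact zeros, which is why the Dirac component is needed.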
---
Rebuttal Comment 1.1:
Title: Response to Author Rebuttal
Comment: Thank you for the response. I've read "Advantage of joint posterior" and the argument convinces me more so than before of the benefits of having a joint posterior. Furthermore, thank you for giving your perspective on the necessary assumptions on the statistics of the data, when the data is collected from the real world. I've also read the argument by Reviewer 9gh3 on the "measure theoretic flaw" and I disagree; that is, I'm siding with your detailed argument that resolves the alleged problem.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your reply. We are glad to hear that our responses were helpful. | Summary: This paper uses GFlowNets to learn a generative model that samples from the posterior distribution over both the graphical structure and the corresponding parameterisation of Bayesian networks given observations. The paper demonstrates that sub-trajectory balance conditions suffice to ensure that the GFlowNet induces a distribution proportional to the reward function associated with a final sample. Experimental results demonstrate that this approach is able to recover a distribution that closely follows the true posterior. The method outperforms alternative strategies for linear Gaussian CPDs and is competitive with alternative approaches for non-linear Gaussian CPDs.
Strengths: - The problem statement and extension of Gflownets proposed to solve it are very sound. Jointly learning discrete structures and continuous parameterisation has many promising applications in various domains
- Although merging continuous and discrete GFlowNets does not seem very original in itself, it requires adapting the balancing conditions appropriately, and making it work is a strong result in itself.
- Experimental results are very promising.
Weaknesses: Although I have found the idea presented in the paper of learning a generative model over the structure and continuous parameters very interesting, I also find the paper hard to read and understand technically for a reader non-familiar with GFlowNets. For instance, it is unclear exactly how the policy is parameterised and optimised. Although section 3.3 discusses the learning objective, it does not provide a clear explanation of how exactly the training process happens, which makes it hard for me to argue for more than a weak accept. I consider it important for a paper at a top venue to be clear by itself and to not require checking all supplementary materials and/or related work to be understood by someone with the sufficient mathematical background. I would suggest the authors merge sections 2 and 3 in order to save space and give a more precise description of the parameterisation of the policy and training algorithms used. Moreover, I find it strange that 5.3 is also so weakly developed in the main materials whereas it hints at a very important capacity of the proposed method, which might handle interventional data well.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Why did you not try a more expressive parameterisation of the CPD, such as normalizing flows?
- Why did you only consider N=100 in 5.2?
- If the edge feature is a probability why do you used MSE, rather than a better metric for comparing distributions such as the KL?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: /
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their review and their suggestions to improve the presentation of our method.
> *I also find the paper hard to read and understand technically for a reader non-familiar with GFlowNets.*
>
We are sorry to hear that the presentation was not sufficiently clear. Our objective was precisely to make the description of GFlowNets as self-contained as possible in the paper, in particular in Section 2. We introduced all necessary notions in Section 2.2 with a description of how objects are sampled from the GFlowNet, and described the prior work DAG-GFlowNet in Section 2.3 since our submission builds on top of it. We also included additional details about training objectives in Appendix C.1 & C.2.
> *For instance, it is unclear exactly how the policy is parameterised and optimised. Although section 3.3 discusses the learning objective, it does not provide a clear explanation of how exactly the training process happens, which makes it hard for me to argue for more than a weak accept.*
>
We invite you to read the general rebuttal comment (**Practical implementation**) for a detailed response. We want to again emphasize that all the components were described in detail in the main text.
> *I consider it important for a paper at a top venue to be clear by itself and to not require checking all supplementary materials and/or related work to be understood by someone with the sufficient mathematical background.*
>
We absolutely agree with the reviewer here, and that is why we strove to make this submission as self-contained as possible. However, with only 9 pages available, we made our best effort to provide as extensive a description of GFlowNets as possible in the main text, and we necessarily have to defer details to the supplementary material to leave enough space for a complete description of our contributions (Section 3).
> *I would suggest the authors merge sections 2 and 3 in order to save space and give a more precise description of the parameterisation of the policy and training algorithms used.*
>
Section 2 discusses the necessary background for our submission (not our contributions), whereas Section 3 corresponds to our contributions; as such, we find that it would be confusing to merge these two sections to save space. Moreover, as mentioned above, the parametrization of the forward transition probability is already described in great detail in Section 3.4. What additional details would the reviewer like to see included?
> *Moreover, I find it strange that 5.3 is also so weakly developed in the main materials whereas it hints at a very important capacity of the proposed method, which might handle interventional data well.*
>
Due to limited space, we had to sacrifice the analysis of our experiments on real biological data for legibility, and defer them to Appendix D.5.2 (referenced in the main text in line 350). We only provided a description of the experiments and their relevance, in particular for interventional data. Upon acceptance and with an additional page in the camera ready version, we propose to move the complete Appendix D.5.2 (including Figure D.3) back into the main text.
> *Why did you not try a more expressive parameterisation of the CPD, such as normalizing flows?*
>
Individually, each CPD may be simple (Gaussian, although with non-linear functions), but as a whole the Bayesian network can model quite a complex joint distribution. Using normalizing flows at the level of individual CPDs would be interesting, but it is not clear if this would significantly increase the expressiveness of the joint distribution. However, our method is not limited to Gaussian distributions (e.g., our experiments in Sec 5.3 on real data), and it could be applied to CPDs parametrized by normalizing flows too. We leave this as future work.
> Why did you only consider N=100 in 5.2?
>
We used $N=100$ in our experiments in Section 5.2 in order to match the experimental setups used in a vast majority of the literature on Bayesian structure learning (Cundy et al., 2021; Lorch et al., 2021; Deleu et al., 2022; Wang et al., 2022). The motivation for a small dataset size is to highlight the advantage of the Bayesian perspective to capture uncertainty about the graphical structures.
> *If the edge feature is a probability why do you used MSE, rather than a better metric for comparing distributions such as the KL?*
>
The edge features correspond to the marginal probability of a specific edge being in the graph, and are defined precisely in Equation D.4. This is not a probability *distribution* over edges, and therefore we cannot compare distributions using the KL divergence.
What we could do though is compare the (marginal) distribution over graphs returned by the GFlowNet ($P_{\phi}^{\top}(G)$, using the notation of Theorem 3.1, discarding $\theta$ as described in Appendix D.3.2 line 794) and the exact marginal posterior $P(G\mid \mathcal{D})$ using the KL divergence (or any other divergence, such as the Jensen-Shannon divergence for symmetry): $\mathrm{KL}(P_{\phi}^{\top}||P(\cdot\mid \mathcal{D}))$. However, we wouldn’t be able to compute this metric for most of the baseline methods, since they only provide samples of the posterior approximation, and not the full distribution (we could instead use the cross-entropy metric, as we do for the component in $\theta$).
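For concreteness, a minimal sketch of how such edge marginals (and the MSE against exact marginals) are estimated from posterior samples; the sample graphs and the "exact" values below are made up for illustration:

```python
# Hypothetical posterior samples over graphs on 3 nodes; in the paper
# these would be graphs sampled from the GFlowNet.
samples = [frozenset({(0, 1)}), frozenset({(0, 1), (1, 2)}),
           frozenset(), frozenset({(0, 1)})]
edges = [(0, 1), (1, 2), (0, 2)]

# Marginal probability of each edge: the fraction of samples containing it.
marginals = {e: sum(e in G for G in samples) / len(samples) for e in edges}

# Illustrative "exact" marginals to compare against (made up here).
exact = {(0, 1): 0.8, (1, 2): 0.2, (0, 2): 0.0}
mse = sum((marginals[e] - exact[e]) ** 2 for e in edges) / len(edges)
print(marginals[(0, 1)], round(mse, 4))  # → 0.75 0.0017
```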
---
(Cundy et al., 2021) Chris Cundy, et al. BCD Nets: Scalable Variational Approaches for Bayesian Causal Discovery. NeurIPS 2021.
(Lorch et al., 2021) Lars Lorch, et al. DiBS: Differentiable Bayesian Structure Learning. NeurIPS 2021.
(Deleu et al., 2022) Tristan Deleu, et al. Bayesian Structure Learning with Generative Flow Networks. UAI 2022.
(Wang et al., 2022) Benjie Wang, et al. Tractable Uncertainty for Structure Learning. ICML 2022. | null | null | null | null |
Achieving Cross Modal Generalization with Multimodal Unified Representation | Accept (poster) | Summary: This paper is focused on a novel and promising setting, which is the zero-shot generalization ability in other modalities that lacks annotations. It is meaningful in real-world scenarios even though it still requires paired multimodal pretraining. In addition, I agree with the authors that the ability of the network for unseen modalities is significant to the community, and the paradigm without fine-tuning encoders in the downstream tasks is also worth more attention.
Strengths: - Useful insights and meaningful technical direction
- Promising experimental results and detailed evaluation
- Detailed and sufficient analyses
Weaknesses: - The organization of the content. The manuscript lacks more promising proposals in the introduction. Obviously, the paragraphs in L58-75 can be greatly reduced.
- The visualization in Figure 4 contains very few instances. It's suggested to present a more comprehensive comparison.
- This paper is more focused on audio-visual understanding and introduces text to these tasks. The core contribution of generalizing to unseen modalities is not sufficiently evaluated.
- The ability of the network for zero-shot learning on seen modality is not evaluated. With strong pretraining based on paired data, I think that it has a strong ability for zero-shot learning. I strongly encourage the authors to add two significant experiments: i. Zero-shot classification on ImageNet ii. Zero-shot classification on AudioSet. It will significantly improve the paper's quality and impact, and I will consider improving my rating.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - According to Figure 3, it seems that text plays an important role in pretraining. It makes the contributions weaker, for which the improvements of this method can be attributed to the paired data of text, audio, and image. In other means, the dependency on paired data still limits the performance although the authors aim to reinforce the ability of unseen modality generalization.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitation of this paper is mainly about current applications on audio-visual tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer w8Lb:
Thank you very much for taking time to read our paper and giving such insightful comments. Please see the following for your point-by-point response.
---
**Weakness 1. The organization of the introduction**
We greatly appreciate your useful comments to make our paper better. We have reorganized the description in the introduction, especially the content you mentioned in Lines 58-75.
---
**Weakness 2. The visualization in Figure 4 contains very few instances.**
Sorry about the confusion; in fact, we provide more visualization results of our model and the baseline model at different training epochs, as shown in Figure 2 of the appendix. These results more intuitively illustrate the process of our model gradually aligning different modalities in the latent space.
---
**Weakness 3. This paper is more focused on audio-visual understanding and introduces text to these tasks.**
Sorry about the confusion. In this paper, we do not focus on audio-visual understanding tasks; instead, we use various audio-visual tasks to demonstrate that our pre-trained model can directly generalize from a seen modality to an unseen modality. For example, in the **cross-modal event localization** task, after obtaining the pre-trained unified representation model, we use it as the encoder to train the event classification task on the seen modality (e.g., audio); then during inference, we directly test the event classification performance on the unseen modality (e.g., video). We also conduct downstream experiments on the AVS-S4 dataset to demonstrate that our model achieves zero-shot cross-modal (text2audio and audio2text) segmentation ability.
We believe these cross-modal generalization tasks demonstrate that our method successfully achieves multi-modal unified representation.
---
**Weakness 4. The ability of the network for zero-shot learning on seen modality is not evaluated.**
In this paper, the purpose of our pre-training is to align different modalities more closely in the hidden embedding space by using a set of intermediate discretized codes. Thus, after pre-training, features from different modalities that share the same semantics will aggregate to the same discrete latent codes. However, these discrete variables do not contain any knowledge of downstream tasks. To achieve zero-shot cross-modal knowledge transfer, the model has to train on these discrete vectors using the seen modality during downstream training. In conclusion, our model cannot achieve zero-shot learning on the seen modality.
However, our pre-trained model can not only align different modalities, but also merge features with similar meanings within the same modality into the same discrete variable. Thus, our model achieves promising few-shot results on the seen modality in downstream tasks.
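To illustrate the intuition behind aggregating semantically similar features into the same discrete code, here is a minimal sketch of nearest-code quantization; the codebook and the two features are made up for illustration, and this is not our exact model:

```python
# Features from different modalities are snapped to the nearest codebook
# vector, so semantically matching audio and video features can land on
# the same discrete code.
codebook = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]  # hypothetical codes

def quantize(feature):
    """Index of the nearest code under squared Euclidean distance."""
    dists = [sum((f - c) ** 2 for f, c in zip(feature, code))
             for code in codebook]
    return dists.index(min(dists))

audio_feat = [0.9, 0.1]    # e.g., the sound of an event
video_feat = [0.95, 0.05]  # e.g., a video frame of the same event
print(quantize(audio_feat) == quantize(video_feat))  # → True
```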
We conduct simple few-shot experiments on the UCF-101 dataset (video) and part of AudioSet (audio); the model is pre-trained on VGGSound-AVEL 40K.
**Video (UCF-101 dataset):**
| Model |1-shot |2-shot |4-shot |8-shot |16-shot|32-shot|
|---------------|-------|-------|-------|-------|-------|-------|
| baseline | 1.20 | 1.18 | 1.23 | 1.21 | 1.23 | 1.22 |
| w/o CrossCPC | 18.6 | 23.4 | 29.3 | 33.3 | 36.2 | 37.7 |
| Full model | 26.1 | 33.2 | 39.0 | 43.2 | 45.9 | 47.3 |
**Audio (AudioSet dataset):**
| Model |1-shot |2-shot |4-shot |8-shot |16-shot|32-shot|
|---------------|-------|-------|-------|-------|-------|-------|
| baseline | 1.17 | 1.23 | 1.21 | 1.23 | 1.24 | 1.17 |
| w/o CrossCPC | 23.6 | 31.2 | 38.0 | 42.8 | 46.5 | 49.2 |
| Full model | 32.4 | 44.5 | 51.8 | 56.9 | 58.0 | 60.6 |
From the results, we can see that our method outperforms the baseline by a large margin under all settings, which further illustrates that our model aggregates semantically similar features, whether they come from the same modality or from different modalities. We thank the reviewer again for pointing this out, which further improves the quality of our work. We hope these additional experiments address your concern. We will add these few-shot experiments and their analysis in the revised paper.
---
**Question 1. According to Figure 3, it seems that text plays an important role in pretraining**
Thank you very much for your careful observation; let us explain your question in detail. As you can see in Figure 3, the introduction of text does indeed improve the unified representation of audio and visual modalities. This is because the introduction of a third modality serves as a bridge to help align the other two modalities, which has also been noted in other papers, such as UniVAL [1]. When introducing the third modality, our proposed Cross-CPC and MM-EMA let any two modalities interact with each other to shorten their distance, which facilitates better alignment among these modalities. The performance gain brought by text does not mean that our contribution is weakened.
[1] UnIVAL: Unified Model for Image, Video, Audio and Language Tasks
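For intuition, a generic exponential-moving-average codebook update of the kind MM-EMA-style methods typically use can be sketched as follows; the decay, codes, and feature means are made-up numbers, and this is not the exact MM-EMA update rule in the paper:

```python
def ema_update(code, batch_mean, decay=0.99):
    """Move a codebook vector toward the batch mean of the features
    assigned to it; generic EMA form, for illustration only."""
    return [decay * c + (1.0 - decay) * m for c, m in zip(code, batch_mean)]

code = [1.0, 0.0]
audio_mean = [0.8, 0.2]  # mean of audio features assigned to this code
video_mean = [0.9, 0.1]  # mean of video features assigned to this code

# Alternating updates from both modalities pull the shared code toward a
# representation common to the two modalities.
code = ema_update(code, audio_mean)
code = ema_update(code, video_mean)
print(code)
```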
---
**Question 2. In other means, the dependency on paired data still limits the performance although the authors aim to reinforce the ability of unseen modality generalization.**
Sorry about the confusion. We also conduct a series of experiments on unpaired downstream datasets: transferring the event localization ability of the model from the **seen modality in the AVE dataset** to the **unseen modality in the AVVP dataset** (Lines 242-245), as shown in the right part of Table 3. The results demonstrate that even though the seen and unseen modalities come from different sources, our method still achieves strong zero-shot cross-modal generalization with unsupervised pretraining. We will add more analysis of these experiments in our paper.
---
We thank the reviewer again for your positive feedback. If you have any further questions or comments, please let us know, we are glad to respond.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for responding. After carefully reading these responses, I think that this paper can satisfy the standard of publications in NeurIPS. I will keep my rating and look forward to the presentation of this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you very much! We are glad to know that our response addresses your concerns. We will revise our paper based on your insightful suggestions. | Summary: This paper proposes to learn a unified discrete representation from paired multimodal data during pre-training. During the downstream task, it can achieve zero-shot generalization ability in other modalities when only one modal is labeled. Specifically, it develops a Dual Cross-modal Information Disentangling (DCID) module and a Multi-Modal Exponential Moving Average (MM-EMA) to achieve the goal. Experiments are conducted on four tasks.
Strengths: 1. This paper is well-written and easy to follow.
2. The motivation is reasonable.
3. The proposed method is technically sound.
4. Experiments are sufficient.
Weaknesses: 1. The compared methods are out-of-date. The authors should provide more latest works for comparison.
2. Missing some related work of unified representation learning.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see above
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer bPma:
Thank you very much for your acknowledge of our paper. We are glad to answer your questions point by point.
**Weakness 1. The compared methods are out-of-date. The authors should provide more latest works for comparison**
Multi-modal unified representation is a challenging task that has not been studied much. The methods we compared are all the latest studies on explicit unified representation: CODIS (CVPR 2022), CMCM (ACL 2022), TURN (NeurIPS 2022). To achieve cross-modal generalization, discrete variables that can explicitly aggregate different modalities together are necessary. Thus, in the experiments, we only compare our model with these latest explicit unified representation methods. Meanwhile, we will add more discussion of the latest implicit representation methods in our paper.
**Weakness 2. Missing some related work of unified representation learning**
Thank you very much for pointing this out. We will discuss some recent excellent implicit unified representation models in the related work. For example, as Reviewer WtGp suggested, the MAE-based (MAViL, CAV-MAE) and CLIP-based (AVE-CLIP, AV-CLIP) multi-modal models, as well as ImageBind, Zorro, and XKD. All these works can align different modalities more closely in the latent embedding space. We will add discussions of these works in the introduction and related work sections.
We thank the reviewer again for your positive feedback. If you have any further questions or comments, please let us know, we are glad to respond.
---
Rebuttal Comment 1.1:
Title: Thanks for authors' response
Comment: I have carefully read the authors' responses and other reviewers' comments. I think the authors address my concerns. I will keep my rating.
---
Reply to Comment 1.1.1:
Comment: Thank you very much! We are glad to know that our response addresses your concerns. | Summary: The paper proposes a model to learn a unified discrete representation from paired multimodal data during pre-training. Then in downstream tasks, the model can achieve zero-shot generalization ability in other modalities when only one modality is labeled. The two key contributions are the Dual Cross-modal Information Disentangling (DCID) module and the Multi-Modal Exponential Moving Average (MM-EMA). These methods facilitate bidirectional
supervision between modalities and align semantically equivalent information in a shared discrete latent space, enabling fine-grained unified representation of multimodal sequences. Extensive experiments on various downstream tasks show the effectiveness of their method.
Strengths: 1. The authors performed extensive experiments to demonstrate the effectiveness of their method.
2. Their method can be applied to more than two modalities and can achieve zero-shot generalization ability in other modalities when only one modality is labeled in downstream tasks.
Weaknesses: 1. There are still some parts that need to be clarified; see the Questions section.
2. It seems the new commitment loss (eq 8) is not in the ablation study. How does it compare with the original commitment loss in eq(3)?
3. It would be better to include the same-modality upper bound in downstream tasks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Sg is not defined in eq3
2. How to select the negative set Zb is not clear. Is the size N-1? Then where does the randomness come from? In time steps?
3. The text prompts (Sec B.3) seems to be manually designed and are very specific to the event and datasets. Are there ways to easily generate prompts across different events in different datasets?
4. It would be better to also show some qualitative video segmentation examples of the baseline (compared) methods.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have addressed the limitations of their work
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer y1JE:
Thank you so much for taking the time to read our paper and provide valuable comments. We are glad to respond to your questions point by point.
**Weakness. The comparison with original commitment loss**
Thank you for pointing out the missing ablation. We conducted a series of experiments with the model pretrained on VGGSound 40K to illustrate this:
Cross-modal event classification (AVE dataset):
| Model | V->A | A->V |
|---------------|-------|-------|
| Full Model | 47.7 | 52.3 |
| Eq(8)->Eq(3) | 45.1 | 48.2 |
Cross-modal event localization (AVVP dataset):
| Model | V->A | A->V |
|---------------|-------|-------|
| Full Model | 64.0 | 65.6 |
| Eq(8)->Eq(3) | 59.6 | 62.3 |
The results show that, compared with the original Eq. (3), our newly proposed commitment loss achieves a better multi-modal unified representation. We hope this additional experiment strengthens our contribution.
**Question 1. Sg is not defined in eq3**
Sorry about this mistake. Sg in Eq. (3) and (8) is an abbreviation for **Stop Gradient**, which blocks gradients from flowing into its argument. We will revise this in our paper.
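To make the operator concrete, here is a minimal forward-mode autodiff sketch (purely illustrative; in practice stop-gradient is a framework primitive such as `.detach()` in PyTorch or `tf.stop_gradient`) showing that sg passes its value through while zeroing the derivative, so a commitment term like $(z - \text{sg}(e))^2$ only sends gradients into $z$:

```python
# Minimal forward-mode autodiff illustrating the stop-gradient (sg) operator:
# sg(.) passes values through unchanged but blocks gradients from flowing
# into its argument.

class Dual:
    """A value paired with its derivative w.r.t. a chosen input."""
    def __init__(self, val, grad=0.0):
        self.val, self.grad = val, grad

    def __mul__(self, other):
        # Product rule for the derivative.
        return Dual(self.val * other.val,
                    self.grad * other.val + self.val * other.grad)

    def __sub__(self, other):
        return Dual(self.val - other.val, self.grad - other.grad)

def sg(x):
    # Stop gradient: same value, zero derivative.
    return Dual(x.val, 0.0)

# d/dz of the commitment term (z - sg(e))^2 at z=3, e=1:
# the gradient flows only through z, so d/dz = 2*(z - e) = 4.
z = Dual(3.0, 1.0)   # seed derivative w.r.t. z
e = Dual(1.0, 0.0)
loss = (z - sg(e)) * (z - sg(e))
print(loss.val, loss.grad)  # 4.0 4.0
```

Without `sg`, the gradient would also flow into `e`; with it, `e` behaves as a constant target, which is exactly the role it plays in VQ-style commitment losses.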
**Question 2. How to select the negative set Zb is not clear**
Yes, as we mentioned in Line 186, the number of negative samples is **N-1**, and these negative samples are randomly selected from the other sequences within the same batch. Thank you for noticing this detail; we will add more clarification in our paper.
**Question 3. About the design of the text prompts**
Yes, it is a good question. In our paper, for simplicity, we manually designed several different prompt templates for each event category in the VGGSound-AVEL and AVS-S4 datasets. Following your suggestion, we tried using ChatGPT to automatically generate a variety of corresponding prompts according to the characteristics of each event, and the quality is satisfactory.
**Question 4. It would be better to also show some qualitative video segmentation examples of the baseline methods.**
Thank you for pointing this out; we have added the qualitative video segmentation results of the baseline model. Please refer to the global PDF we just uploaded. We will also update Figure 5 in our paper.
We thank the reviewer again for your positive feedback. If you have any further questions or comments, please do not hesitate to ask, we are glad to respond.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. I have read the rebuttal and other reviewers' comments. I think they address my concerns. I will keep my rating.
---
Reply to Comment 1.1.1:
Comment: Thank you very much! We are glad to know that our response addresses your concerns. We will revise our paper based on your insightful suggestions. | Summary: In this paper the authors propose a novel task called Cross Modal Generalization (CMG), where they aim to use unlabelled internet scale paired multimodal data during pre training and then use it for zero shot generalization to other modalities in downstream tasks. The authors claim that disentagling modality specific features is crucial for allowing shared representations to generalize across modalities for which they propose a Dual Cross-modal Information Disentangling (DCID) module which incorporates 2 different aspects ie MI minimization between modal-agnostic semantic features and modal-specific features in each modality (CLUB) and MI maximization between modal agnostic semantic features across different modalities (Cross-CPC). They further propose Multi-modal Exponential Moving Average (MM-EMA) to achieve fine-grained cross-modal alignment in unconstrained scenarios.
The authors demonstrate the effectiveness of their approach on several downstream tasks and show significant improvements over the prior state of the art on standard benchmarks. They also present an elaborate ablation study to demonstrate the contributions of each of the components of their proposed approach and show how their proposed modules improve upon an otherwise weak baseline. They present their results on AVE, AVVP and AVS-S4 datasets and show the effectiveness of their approach on Audio-video, audio-text and audio-video-text modalities.
Strengths: - The paper is very well written and addresses several major challenges in the field of learning unified multimodal representations. Its claim of the necessity to disentangle modality-specific features is well founded in literature, and the method proposed to achieve it is solid and sound.
- Thorough ablation studies are performed to highlight the use of each of the proposed method components
- Presentation of the different components and the overall setup is easy to follow and understand
- I like the formulation of the Dual Cross-modal Information Disentangling module. It takes cues from the MI maximization theory of disentanglement and is able to formulate it in a simple and succinct manner while being able to carefully optimize and train using a widely diverse training objective.
Weaknesses: - The paper doesn't include comparisons to a vast body of literature on Masked Autoencoders, which have recently gotten very popular for learning joint vector embeddings. For example: Contrastive Audio-Visual Masked Autoencoder by Gong et al., MAViL: Masked Audio-Video Learners by Huang et al., Masked Autoencoders Are Scalable Vision Learners by He et al., Masked Autoencoders that Listen by Huang et al., etc. Similarly, the CLIP-style approaches and their extensions to further modalities (AVE-CLIP, AV-CLIP, Audio-CLIP, etc.) have not been discussed and compared against. Both these lines of methods have proven to be very effective in literature and should make for good comparison cases here (or be discussed as to why they aren't relevant comparisons for this work). Another recent work which expands to even more modalities is ImageBind: One Embedding Space To Bind Them All by Girdhar et al. and should be compared against.
- The suite of downstream tasks to be evaluated against needs to be more diverse to truly evaluate whether the semantic modality-agnostic features have truly been captured by the representations.
- The work seems easily extendable to the text-visual modality as well, for which there are several benchmarks and baselines available to compare against; this would have made for a good comparison, but that extension was never done. Curious as to why that combination wasn't considered?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - The work seems easily extendable to the text-visual modality as well, for which there are several benchmarks and baselines available to compare against; this would have made for a good comparison, but that extension was never done. Curious as to why that combination wasn't considered?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: - No major negative impacts or limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer WtGp:
Thank you very much for your insightful comments and your acknowledgement of our proposed Dual Cross-modal Information Disentangling module. Let us illustrate your questions point by point.
---
**Weakness 1. For comparison with MAE-based and CLIP-based method**
We greatly appreciate the reviewer pointing out these important methods we missed. As you mentioned, the **MAE-based** and **CLIP-based** audio-visual methods can effectively align two modalities sharing the same semantics within a joint embedding space. These implicit unified representation methods can greatly advance the performance of cross-modal retrieval and multi-modal fusion tasks. However, these methods cannot explicitly map different modalities together in the latent space, and thus may not achieve optimal performance on our cross-modal generalization downstream tasks. For example, we implement the AVE-CLIP pre-training method on VGGSound-AVEL 40k and compare its performance with our method on the cross-modal event classification task (AVE dataset):
| Model | V->A | A->V |
|------------------|-------|------|
| AVE-CLIP | 24.8 | 27.8 |
| CMCM | 32.7 | 36.8 |
| Our Full Model | 47.7 | 52.3 |
From the results we can observe that such contrastive learning between audio-visual modalities is not sufficient for perfect multi-modal unified representation.
In recent years, explicit unified representation methods such as CODIS (CVPR '22), TURN (NeurIPS '22), and CMCM (ACL '22) have been introduced. These methods utilize discrete latent variables as bridges to explicitly merge different modalities into quantized codes, making them more suitable for this new task. Furthermore, TURN (NeurIPS '22) also contains a self-cross-reconstruction mechanism similar to MAE; we ran their code on our task, and the results in Table 1 show that our method can greatly surpass them. We greatly appreciate the reviewer pointing this out, and we have added discussions of these methods and ImageBind in our paper.
---
**Weakness 2. The suite of downstream tasks to be evaluated against need to be more diverse to truly evaluate if the semantic modality agnostic features have truly been captured by the representations.**
This is a very constructive suggestion, and we fully agree with the reviewer. In our paper, we conducted the following experiments to address it:
1) We incorporate our DCID module with other state-of-the-art models, and the results in Table 1 show that, when equipped with DCID, all these models obtain substantial improvements in downstream tasks, which illustrates that our DCID method can effectively extract modality-agnostic features and facilitate all other compared models.
2) Furthermore, as shown in Figure 5, the visualization results illustrate that although there are huge domain gaps between the text and audio modalities, the model trained on text (or audio) can still accurately localize the right visual region when the query is replaced with audio (or text) during inference. These experiments show that our pre-trained model can **effectively disentangle semantic modality-agnostic features** from the text and audio modalities.
Thank you very much for your valuable suggestions. We will design more diverse experiments to demonstrate that our model can truly capture semantic modality-agnostic features.
---
**Weakness 3 & Question. More experiments about text-visual cross modal generalization tasks**
We appreciate the reviewer's constructive comment. Yes, our work can be extended to the text-visual modality. Here we select a **retrieval task** to demonstrate that our method can also be applied to **visual-text generalization**. We use audio as an intermediary to measure the generalization ability of our model across these two modalities and implement an X-to-audio retrieval task. In detail, in the first stage, we train **visual-text unified representation** learning using the VGGSound24K dataset; in the second stage, during downstream training, we let the model learn text(video)-to-audio retrieval; finally, during inference, we directly test the generalization ability of the model on video(text)-to-audio retrieval. For simplicity, we choose a cross-attention network as our downstream retrieval model.
We test the retrieval performance on a part (8k) of the AudioSet dataset; the results are as follows:
v->t (v2a retrieval for training, t2a retrieval for test)
| Model | R@5 | R@10 |
|---------------|-------|-------|
| Baseline | 0.47 | 1.03 |
| Full Model | 10.3 | 21.9 |
t->v (t2a retrieval for training, v2a retrieval for test)
| Model | R@5 | R@10 |
|---------------|-------|-------|
| Baseline | 0.62 | 0.85 |
| Full Model | 8.47 | 16.7 |
The results show that, compared with the baseline model, our model can effectively achieve **zero-shot cross-modal audio retrieval**.
Due to rebuttal time constraints, here we only compare our full model with the baseline model; however, we will further evaluate other comparison methods on visual-text benchmarks and add these experiments to our paper. We really appreciate your comments, which make our paper more solid.
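For reference, the R@K numbers above can be computed as follows (a minimal sketch with illustrative names, not the evaluation code used in the paper): the metric is the fraction of queries whose ground-truth item appears in the top-K of the ranked retrieval list.

```python
# Hedged reference sketch of the R@K retrieval metric. ranked_lists[i] is
# the candidate list for query i sorted by model score; targets[i] is the
# ground-truth item for that query.

def recall_at_k(ranked_lists, targets, k):
    hits = sum(target in ranked[:k]
               for ranked, target in zip(ranked_lists, targets))
    return hits / len(targets)

# Two toy queries: the first query's target is within its top-2, the
# second query's target is not.
ranked = [["a3", "a1", "a7"], ["a5", "a9", "a2"]]
targets = ["a1", "a2"]
print(recall_at_k(ranked, targets, 2))  # 0.5
```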
---
We thank the reviewer again for raising these insightful suggestions to make our paper better. If you have any further questions or comments, please let us know, we are glad to respond. | Rebuttal 1:
Rebuttal: Dear reviewers,
We greatly appreciate your acknowledgement of our work and your helpful, insightful comments. Following the reviewers' suggestions, we have made a major revision of the paper and conducted a series of new experiments to address the reviewers' concerns. We have also updated two figures in the single-page PDF file as suggested by two reviewers. In the following, under each reviewer's comment, we address the concerns of the reviewers point by point.
Pdf: /pdf/5e414b44ee6cab4eb66e3c42b9f7c256110e200d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper first introduces a new task called Cross Modal Generalization, which aims to learn a unified discrete representation from paired multi-modal data during pre-training, and realize zero-shot generalization in other modalities in downstreams tasks. This paper proposes Dual Cross-modal Information Disentangling (DCID) module and Multi-Modal Exponential Moving Average (MM-EMA) to facilitate bidirectional supervision between modalities and align semantically equivalent information in a shared discrete latent space. Experiments on various downstream tasks validate the effectiveness of the proposed methods.
Strengths: 1. Introduce a new task CMG, mapping various modalities into a unified discrete space.
2. Propose DCID and MM-EMA, extracting shared semantic information and project them into a common quantized latent space.
3. Performance significantly outperforms previous methods.
Weaknesses: 1. This paper only performs modality transfer on one pair of modalities, A & V.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The subfigure (b) and (c) in Figure 4 do not look much different. May consider a better visualization, such as the color of the points.
2. "sg" in Equation (3) and (8) is not explained.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors mention limitations that they only focus on the unified representations of three modalities and future works can explore more modalities.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer jARS:
We appreciate your positive feedback and providing very valuable suggestions. Let us respond to your questions point by point.
---
**For Weaknesses: This paper only perform modality transfer on one pair of modalities, A & V.**
Thanks for asking! Yes, most of our pretraining and downstream experiments are conducted on paired audio-visual datasets. However, to further demonstrate the effectiveness of our methods, we also pretrain our model on **audio-text** and **audio-visual-text** combinations. The results on the **downstream referring video segmentation** tasks (see Table 4, Figure 5 and Lines 301-305) illustrate that our methods can also successfully learn a unified representation of audio & text, and transfer the video segmentation ability from the seen modality (e.g., text) to the unseen modality (e.g., audio) on the AVS-S4 dataset.
In addition, we conduct a series of experiments on **unpaired downstream datasets**: transferring the event localization ability of the model from **one modality in AVE dataset** to **unseen modality in AVVP dataset** (Line 242-245), as shown in the right part in Table 3. The results prove that even though the seen and unseen modalities come from different sources, our method can still guarantee a strong zero-shot cross-modal generalization ability with unsupervised pretraining. We will add more analysis about these in our paper.
Furthermore, following the suggestion of Reviewer WtGp, we also add a simple cross-modal retrieval experiment to demonstrate that our model can also be extended to visual-text. Please see the "**For Weakness 3 & Questions**" section of the reply to Reviewer WtGp for more experimental details.
We hope these analyses and new experiments address the reviewer's concerns and strengthen our paper.
---
**For Question 1: The subfigure (b) and (c) in Figure 4 do not look much different.**
Thank you very much for your constructive suggestions! As you mentioned, although our model achieves a better multi-modal alignment result than the CMCM method, the difference between subfigures (b) and (c) is not obvious. We have changed the color of the mapped codes of both modalities **from purple to green**; please note that we have added these visualizations to the newly updated global PDF.
---
**For Question 2. "sg" in Equation (3) and (8) is not explained.**
Thank you for pointing this out. We apologize for omitting such important information. Sg in Eq. (3) and (8) is an abbreviation for **Stop Gradient**, which blocks gradients from flowing into its argument. We will revise this in our paper.
---
We thank the reviewer again for your positive feedback. We would be very grateful if the reviewer could take time to read our responses and let us know your thoughts. | Summary: The paper proposes a new pretraining task called Cross Modal Generalization (CMG) for learning unified multimodal representations. The goal is to map different modalities (e.g. audio, visual, text) to a shared discrete latent space during pretraining, such that the model can generalize to unseen modalities in downstream tasks when only one modality is labeled. The paper solves two issues: (1) unified semantic features shared cross-modalities. (2) representing these semantic features using a unified codebook. The paper mainly contributes to the first aspect.
Strengths: - A new task is brought to the community, and the task makes sense to me because in reality only part of the data is labeled and the rest largely is not.
- DCID and MM-EMA are intuitive methods for disentangling and aligning semantic information across modalities, using mutual information optimization and teacher-student aggregation.
- Experiments and ablation studies are quite solid.
Weaknesses: - I didn't find weaknesses.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - In the formula (1), are the $\Phi^{a}$ and $\Phi^{b}$ the same in the implementation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: - None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer kKgn:
Thank you very much for your acknowledgement of our methods and giving us positive feedback! We will respond to your question as follows:
**In the formula (1), are the ${\Phi}^{a}$ and ${\Phi}^{b}$ the same in the implementation?**
Sorry about the misunderstanding. For different modalities, ${\Phi}^{a}$ and ${\Phi}^{b}$ are not always the same in the implementation. For a given visual feature $V \in R^{B\times T \times H \times W \times C}$, we first apply an average pooling operation to it and obtain $\bar{V} \in R^{B\times T \times C}$. We follow the Spatial-Channel Attention proposed in CMRAN [1], which uses $\bar{V}$ to compute attention scores with $V$ at the spatial and channel levels; we then apply self-attention at the temporal level and obtain the final visual semantic features $Z^{v} \in R^{B\times T \times C}$. For the audio and text input features, we directly use a self-attention layer at the temporal level to obtain the final audio or text semantic features in $R^{B\times T \times C}$. Thank you very much for pointing this out; we will add these implementation details in the appendix.
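A shape-level sketch of the visual branch described above may help (illustrative only; the real model uses CMRAN's Spatial-Channel Attention followed by temporal self-attention, which are omitted here). It shows only the initial pooling step $V \in R^{B\times T \times H \times W \times C} \rightarrow \bar{V} \in R^{B\times T \times C}$, using plain nested lists to stay dependency-free:

```python
# Illustrative spatial average pooling: V (B,T,H,W,C) -> V_bar (B,T,C).
# Only the first step of the visual branch described above; the
# spatial-channel and temporal attention layers are omitted.

def spatial_avg_pool(V):
    H, W = len(V[0][0]), len(V[0][0][0])
    C = len(V[0][0][0][0])
    return [[[sum(frame[h][w][c] for h in range(H) for w in range(W)) / (H * W)
              for c in range(C)]
             for frame in clip]   # one frame per time step t
            for clip in V]        # one clip per batch element b

# A dummy all-ones feature tensor with B=2, T=5, H=W=4, C=8.
V = [[[[[1.0] * 8 for _ in range(4)] for _ in range(4)] for _ in range(5)]
     for _ in range(2)]
V_bar = spatial_avg_pool(V)
print(len(V_bar), len(V_bar[0]), len(V_bar[0][0]))  # 2 5 8
```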
We thank the reviewer again for your very positive feedback! If you have any further questions or comments, please let us know, we are glad to respond.
[1] Cross-Modal Relation-Aware Networks for Audio-Visual Event Localization
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. The line 144 says that $\Phi^a$ and $\Phi^b$ are for the *modal-agnostic features*, if they are different, could you explain why do they produce modal-agnostic features? I thought the modal-agnostic encoders would be a general feature extractor for any modalities.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. It is a very good question!
As you mentioned, previous works often use a general feature encoder to represent different modalities and extract modal-agnostic features, achieving good results in multi-modal implicit representation. However, in this work, although the **modal-agnostic encoders** used in our paper have different structures, our proposed **Cross-CPC** offers cross-modal supervision: the fine-grained cross-modal predictions force ${\Phi}^{a}$ and ${\Phi}^{b}$ to identify which parts are more relevant to the other modality, and we believe these aspects of the information can be regarded as **semantic information, or modal-agnostic information**.
Meanwhile, the **mutual information minimization loss** and the **reconstruction loss** make the modal-specific encoders (${\Psi}^{a}$ and ${\Psi}^{b}$) extract **modal-specific information** that is unrelated to other modalities. In this way, our model disentangles the original input information into two parts: one is modal-agnostic information, or semantic information (from ${\Phi}^{a}$ and ${\Phi}^{b}$), which we believe is highly related to other modalities; the other is modal-specific features (from ${\Psi}^{a}$ and ${\Psi}^{b}$), which do not supply useful information for our unified representation but are vital for original feature reconstruction.
In conclusion, our proposed DCID module (especially the Cross-CPC module) guides ${\Phi}^{a}$ and ${\Phi}^{b}$ to extract modal-agnostic features from different modalities.
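As a concrete (and purely illustrative) picture of the contrastive supervision described above, a minimal InfoNCE-style loss scores the predicted cross-modal feature against its paired positive and the N-1 batch negatives; all names and vectors here are hypothetical, not the paper's implementation:

```python
import math

# Minimal InfoNCE-style objective sketch: the prediction made from modality
# a should score its paired modality-b feature above the batch negatives.

def info_nce(pred, positive, negatives):
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    scores = [dot(pred, positive)] + [dot(pred, neg) for neg in negatives]
    m = max(scores)  # stabilised log-sum-exp
    log_z = m + math.log(sum(math.exp(s - m) for s in scores))
    return -(scores[0] - log_z)  # cross-entropy with the positive at index 0

# A well-aligned pair yields a lower loss than a misaligned one.
aligned = info_nce([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0], [-1.0, 0.0]])
misaligned = info_nce([1.0, 0.0], [0.0, 1.0], [[1.0, 0.0], [-1.0, 0.0]])
print(aligned < misaligned)  # True
```

Minimizing this loss is what pushes the modal-agnostic encoders toward information that is predictable from the other modality.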
Trans-Dimensional Generative Modeling via Jump Diffusion Models | Accept (spotlight) | Summary: This paper proposes a new diffusion model based on jump diffusion processes. Compared with previous discrete and continuous formulations, the model introduces the usage of a transition kernel, which models the jump process in a semantically meaningful manner. The method absorbs standard constructions like diffusion guidance and produces good results on molecule and robot arm video tasks.
Strengths: * The proposed methodology is elegant and theoretically sound.
* Many constructions like diffusion guidance extend naturally.
* The proposed methodology is widely applicable, as many natural data types include a continuous portion (modeling the space) as well as a discrete but unknown component.
Weaknesses: * For the molecule task, the metrics are based on molecular validity properties. This seems like it wouldn't account for overfitting from the model, which seems more relevant for a generative modeling task. In particular, can you report a diversity/non-memorization metric?
* The robot arm example, while sufficient for the paper, is rather toy.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Nothing more than the weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their review and positive feedback. We are very pleased to hear that our approach is considered to be elegant and theoretically sound with wide applicability. We address the specific comments from the review here.
> *For the molecule task, the metrics are based on molecular validity properties. This seems like it wouldn't account for overfitting from the model, which seems more relevant for a generative modeling task. In particular, can you report a diversity/non-memorization metric?*
Thank you for this suggestion; we have investigated sample diversity and novelty and will include these results in an update to the paper. On the molecule task, we can measure uniqueness by computing the chemical graph corresponding to each generated sample and measuring what proportion of the 10000 produced samples have a unique chemical graph amongst this set of 10000, as is done by Hoogeboom et al. 2022. We show our results in Table 1 in the additional results pdf and find our method without any ablations has only slightly lower levels of uniqueness when compared to the fixed dimension diffusion model baseline.

Measuring novelty on generative models trained on the QM9 dataset is challenging because the QM9 dataset contains an exhaustive enumeration of all molecules that satisfy certain predefined constraints, as noted by Vignac et al. 2022 and Hoogeboom et al. 2022. Therefore, if a novel molecule is produced it means the generative model has failed to capture some of the physical properties of the dataset, and indeed Hoogeboom et al. 2022 found that during training, as the model improved, novelty decreased. Novelty is therefore not typically included in evaluating molecular diffusion models. For completeness, we include the novelty scores in Table 1 as a comparison to the results presented in Hoogeboom et al. 2022, Appendix C. We find that our samples are closer to the statistics of the training dataset whilst still producing 'novel' samples at a consistent rate.
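A minimal sketch of the uniqueness computation described above (illustrative only; using a canonical string key such as a canonical SMILES to stand in for the chemical graph is an assumption for this example):

```python
# Hedged sketch of the uniqueness metric: the proportion of generated
# samples whose chemical graph (represented here by a canonical string
# key) is distinct within the generated set.

def uniqueness(sample_keys):
    return len(set(sample_keys)) / len(sample_keys)

# Five samples with one duplicated graph -> 4 distinct keys out of 5.
u = uniqueness(["CCO", "CCN", "CCO", "c1ccccc1", "CC(=O)O"])
print(u)  # 0.8
```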
## References
Emiel Hoogeboom, Vıctor Garcia Satorras, Clément Vignac, and Max Welling. *Equivariant diffusion for molecule generation in 3d.* International Conference on Machine Learning, 2022.
Clement Vignac and Pascal Frossard. *Top-n: Equivariant set and graph generation without exchangeability.* International Conference on Learning Representations, 2022
---
Rebuttal Comment 1.1:
Title: Thank you for your response!
Comment: Thank you for clearing up any confusion that I had. | Summary: This paper focuses on varying dimensional datasets and proposes a novel generative model to solve the varying dimensional problems. The proposed model is theoretically valid and has an interesting and novel contribution to extending the traditional score-based generative model by generating both state values and dimensions jointly during the generative process, which is idea-simple but effective. Experiments on molecule generation and video generation both show this model's effectiveness.
Strengths: 1. The idea of jointly modeling the state and dimension, in particular, the idea of using the intensity function to model the jump distribution, is interesting and novel.
2. The theoretical contribution is valid and the experiment is thorough in supporting the proposed model.
Weaknesses: None.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I am confused about how to properly define $K^{\text{del}}(i|n)$ in $\overrightarrow{K}_t(\mathbf{Y}|\mathbf{X})$? Any clarification about this issue will be helpful.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the authors have adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their review and we appreciate the praise for our method’s novelty and thorough experiments. We address your questions below.
> *I am confused about how to properly define $K^\text{del}(i | n)$ in $\overrightarrow{K}_t(\mathbf{Y} | \mathbf{X})$? Any clarification about this issue will be helpful.*
$K^\text{del}(i | n)$ will dictate the ordering in which dimensions are deleted in the forward noising process. For example if $K^\text{del}(i | n) = \mathbb{I} \\{ i = n \\}$ then it is always the final dimension of the datapoint that is deleted when a jump occurs. Alternatively, you can set $K^\text{del}(i|n) = 1/n$ so that the dimension to be deleted is chosen uniformly at random when a jump occurs.
The choice of $K^\text{del}( i | n)$ impacts the reverse generative process. If $K^\text{del}(i | n) = \mathbb{I} \\{ i = n \\}$ then datapoints are constructed in an additive way, each new dimension is simply appended onto the end of the current datapoint. This is similar to how a text based autoregressive model builds sentences by generating new words and appending them onto the end of the current sentence. Instead, if $K^\text{del}(i|n) = 1/n$ then during the reverse generative process, the generative model first picks a suitable place to add a new dimension and then inserts one at that chosen point.
The choice of $K^\text{del}( i | n)$ depends on the dataset and problem being tackled. In our case, we choose $K^\text{del}(i|n) = 1/n$ because for molecules there is no natural notion of a ‘final dimension’ due to permutation invariance of the point cloud. Further, for our videos we want to condition on the first and last frame thus an autoregressive appending style of generation would be unsuitable. We present our framework in a general manner so it can be easily applied to other problems where other choices of $K^\text{del}( i | n)$ could be more suitable.
We will make this choice clearer in an update to the paper, thank you for raising this point of confusion.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response! The author's reply cleared up my confusion. For this reason, I improved my score further. | Summary: This paper addresses the problem of modelling data of various dimensions. This is achieved by generalising diffusion models as jump diffusion processes, allowing the content and dimension of data to be jointly modelled. The forward process gradually corrupts the data with gaussian noise while also gradually deleting dimensions until a single normally distributed dimension remains. The reverse process learns both the score function, when to add a dimension, and what data to place in the newly created dimension. Experimental results on molecule and video generation tasks show that the approach can well represent the distribution of data dimensions, while outperforming/being competitive in terms of sample quality.
Strengths: - The proposed approach of modelling the generative process as a jump diffusion process is interesting and is a sensible solution.
- The paper is very clear, easy to read, and to my understanding, the method is technically sound. I particularly like the intuitive explanation of the loss in Equations 3-4; and predicting the original data dimension (line 191) is a good idea to address the optimisation issues.
- Experimental results demonstrate the effectiveness of the proposed approach (Table 2), performing comparably to, or outperforming, the baseline which samples the dimension prior to sampling. I also like the evaluation of the impact of setting $\lambda=0$ (dimension deletion/insertion rate) towards the end of the diffusion process.
- It is shown that reconstruction guidance can be used to generate molecules with specified features (Table 3) much better than dimension independent approaches.
- The problem of jointly modelling content and dimension of data is an important one to address. In my opinion, the proposed solution is compelling and I believe it will be very useful to others.
Weaknesses: - Predicting the content of the newly created dimensions with a gaussian distribution is limiting, potentially creating a discrepancy between train/test time and requiring more diffusion time to correct.
- Fixing $\lambda=0$ towards the end of the diffusion process to ensure added dimensions have time to diffuse is a weakness of the proposed approach. Particularly for more complex data, there is no guarantee that the remaining time will be sufficient.
- Finally, it would have been nice to see a perceptual quality metric for the video generation task, this would offset the above weaknesses.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - Do the authors have any rationale/evidence that fixing $\lambda$ at $t<0.1$ is a reasonable choice? For instance, graphing metrics over a variety of limits.
- Similarly, is there rationale why predicting new dimensions with a gaussian is sufficient; or if for more complex data this is problematic, is there a more expressive extension?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Limitations are well discussed throughout the paper. The previously discussed weaknesses could be mentioned as well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their review and positive comments on the paper. We are especially happy to hear that our proposed method is considered to be a compelling solution to an important problem. You have raised important points regarding the new dimension distribution and the time needed to diffuse at the end of the process. Since these are related issues, we answer both here.
We first emphasize that our framework can be seen as an integration of autoregressive models with diffusion models. In the limit of no diffusion and only jumps then we arrive at a pure autoregressive model whereas if we include diffusion but initialize each new dimension with $\mathcal{N}(0, I)$ and create all dimensions at the start of the reverse generative process then we arrive at a pure diffusion model. We show in the paper how there is a fertile middle ground where we can derive benefits from both types of model with the diffusion part giving good sample quality and the autoregressive part giving trans-dimensional generation capabilities.
There is a lot of flexibility in the choice for the new dimension creation distribution which we refer to as the autoregressive distribution. We chose to parameterize it with a Gaussian in the paper for simplicity but it can be parameterized with any likelihood model because the training signal for learning the autoregressive distribution is simply a maximum likelihood objective for predicting the missing part of the data given the observed part. Therefore, depending on the task, we could use more expressive alternatives such as normalizing flows or a G-SchNet architecture for molecules (Gebauer et al. 2019) which predicts new atom positions using discrete probabilities over binned distances.
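A minimal sketch of one dimension-adding jump of the reverse process under the Gaussian parameterization (illustrative code, ours; the network outputs are stand-ins, and the Gaussian could be swapped for a more expressive likelihood model without changing the interface):

```python
import random

def reverse_jump_step(x, position_probs, mu, sigma):
    """One dimension-adding jump of the reverse generative process.

    x              : current datapoint (list of floats) with n dimensions.
    position_probs : probabilities over the n+1 possible insertion slots
                     (a network output in the real model; given here).
    mu, sigma      : statistics of the Gaussian autoregressive distribution
                     for the new value (also network outputs).

    The sampled value is subsequently refined by the remaining diffusion time.
    """
    pos = random.choices(range(len(x) + 1), weights=position_probs)[0]
    return x[:pos] + [random.gauss(mu, sigma)] + x[pos:]

x = [0.3, -1.2]                                   # current 2-dimensional state
x = reverse_jump_step(x, [0.2, 0.6, 0.2], mu=0.0, sigma=1.0)
assert len(x) == 3                                # one dimension inserted
```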
In our experiments we found that a reasonably simple and effective approach is to have the network predict mean and standard deviation statistics for a Gaussian distribution and then refine this with the diffusion part of the process. In our preliminary experiments we found that $t=0.1$ to $t=0$ is sufficient time to clean up any errors and produce high quality samples whilst retaining the trans-dimensional nature of the model and avoiding instabilities near $t=0$.
To demonstrate this effect, we have run a sweep over when to set $\lambda_t$ to $0$ on the molecule task; see Table 2 in the additional results pdf. We found that the setting $\lambda_{t < 0.03T} = 0$ generates reasonable sample quality but incurs some extra dimension error, due to the generative process sometimes observing a lack of dimensions near $t=0$ and adding too many dimensions. We observed the same effect in the paper (L276) when setting $\lambda_t$ to be constant for all $t$. Further, the setting $\lambda_{t < 0.3T}=0$ also results in increased dimension error due to there being less opportunity for the guidance model to supervise the number of dimensions. Hence, we believe the setting $\lambda_{t<0.1T}=0$ to be reasonable in our case. Note that these models have not been trained for as long as the ones in the paper due to time constraints during the rebuttal period.
We agree that the specific balance could be different for different datasets and model architectures and ultimately this is a case of hyperparameter selection. Further, if it is found that a small amount of diffusion time is insufficient to generate high quality samples, then the expressivity of the autoregressive part can be arbitrarily increased, the only downsides being an increased computational cost and code complexity.
Thank you for the suggestion of perceptual metrics on the video dataset. To investigate the relative quality of frames that are added near the start of the diffusion process versus those that are added near the end, we calculated the FID for individual frames, grouped by when they are added in the generative process. Our results can be found in Table 3 in the additional results pdf. We find no systematic trend meaning there is no degradation in quality for frames added near the end of the generative process. Note that the absolute value of these FIDs may not be meaningful due to the RoboDesk dataset being far out of distribution for the Inception network used to calculate FID scores. We can visually confirm good sample quality from e.g. Figure 5 in the paper.
## References
Niklas W. A. Gebauer, Michael Gastegger, Kristof T. Schütt. *Symmetry-adapted generation of 3d point sets for the targeted discovery of molecules.* Advances in Neural Information Processing Systems, 2019.
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: Thanks a lot to the authors for their thorough responses. The rebuttal addresses my concerns and I am still of the opinion that this is a strong paper and advocate acceptance.
In particular, the provided FID scores added in the rebuttal pdf showing that the final video frames to be generated are of comparable quality, is compelling evidence that the Gaussian approximation is sufficient in this setting. It is also true that should it prove to be problematic, a more expressive model could be used. I also appreciate the extra experiment on $\lambda_t$ showing that $0.1T$ is a reasonable choice and the impact on sample properties when this value is changed, and agree that this is a reasonable additional hyperparameter to have. | Summary: This paper proposes jump diffusion, which is a novel diffusion model to handle data with varying dimensions. The proposed method is derived from a special forward process that contains a jump part that changes the dimension of the generated samples. The corresponding backward process and the learning objective are derived, with two more components that need to be learned other than the standard diffusion drift. Numerical issues are properly handled to make the mathematical model work in practice. Experiments over various application scenarios are conducted to demonstrate the flexibility and versatility of the proposed framework.
Strengths: 1. The proposed method targets a very important problem. It is a great contribution to invent theoretically sound diffusion models to deal with data with varying dimensions. This kind of problem frequently arises in real-world applications.
2. The motivation is strong, the idea is reasonable, and the mathematical derivations make the idea a solid framework.
3. Applications on molecule and video generation showcase the potential of the proposed framework in reality.
Weaknesses: 1. Although I do think the proposed framework is interesting, generation with varying dimensionality can be addressed easily by
(1) learning a distribution over the dimension numbers and sample dimension number from the learned distribution (2) sampling initial random noises according to the sampled dimension number at the beginning of the generation and pad the unused dimensions with 0.
This simple method requires little modification to existing diffusion models, and the only extra effort needed is to add padding during training.
It would be very nice to hear comments from the authors on this simple baseline. I do appreciate the theoretical contribution of this paper, and it would be better if the authors could compare with this simple baseline.
2. In L84, should the summation be taken over m<n? Because there is only deletion in the forward process.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: I have one question on the proposed diffusion model and I would appreciate answers from the authors.
In Algorithm Box 1, what is the initial distribution? Is it a random standard Gaussian noise with only 1 dimension? If so, how do we decide which dimension is the initial dimension? For example, if we have x = (3, x_1, x_2, x_3), what is the initial random noise? (1, N(0, I), 0, 0), (1, 0, N(0, I), 0), or (1, 0, 0, N(0, I))? Or are they uniformly sampled?
Moreover, if the initial number of dimension is always 1, how do we guarantee that in setting the forward process? I feel like there must be some constraints on the scheduling of rate function $\lambda(t)$.
Maybe I missed some parts of the derivation in the paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations and broader impacts are properly discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their engagement with our proposed methodology and thoughtful questions. We are grateful that our work is considered to be a solid framework to tackle a very important problem. We answer questions and respond to comments below.
> *It would be very nice if I can hear comments from the authors on this simple baseline.*
We focussed in this work on training flexible unconditional models that jointly model all the relevant aspects of the data: the state and the dimension. This is in contrast to the mentioned baseline that models the state and dimension separately with two separate models. We showcase the benefit of the joint modelling approach by using the task of diffusion guidance where an unconditional model is first trained and then, at test time, different researchers with different goals can condition this model on their task of interest using diffusion guidance, for example Weiss et al. 2023 and Crowson 2021. This has the advantage that, should the unconditional model be powerful enough, any conditional generation task can be easily accomplished with a limited computational budget because no re-training is required, the only thing needed is a guidance model. In the case of the baseline that you mention, complete knowledge of the final generation task is needed before training because the dimension prediction model and score model both need to be conditioned on this task information. Therefore this type of approach is not applicable to our intended application where the end user does not have the resources for a complete re-training of the models for the tasks they are interested in.
Motivated by this deficiency of models that treat dimensions and state values separately, we developed a generative model that jointly models dimension and state meaning it is powerful enough to be used as an unconditional model that can then be guided at test time for different generation tasks. In this case where the end user is guiding the model on their task of interest, the fixed dimensional model fails to capture the correct dimension information whereas our method can produce more accurate dimensions statistics for each desired task as we show in Table 3 in the paper.
> *In L84, should the summation be taken over m<n? Because there is only deletion in the forward process.*
Thank you for pointing this out, you are correct that for our forward process $K_t( m , \mathbf{y} | \mathbf{X})$ will be zero for $m \geq n$ due to the forward process only deleting dimensions. For the backward process we would instead have $K_t(m , \mathbf{y} | \mathbf{X})$ being $0$ for $m \leq n$. On L84 we introduce jump diffusions for the first time and so try to keep the formulation general enough to cover both the forward and backward cases. However, given that this introduction is in the forward process section this could lead to confusion and we will make this clearer in a revision to the paper.
> *In Algorithm Box 1, what is the initial distribution?*
The initial distribution is indeed a single dimension of standard Gaussian noise. However, we do not need to pre-determine which dimension in the final generated datapoint it corresponds to. We treat it genuinely as only a single dimension i.e. the initial sample is $(1, \mathcal{N}(0, I))$ with no baked in knowledge of the length of the final datapoint to create nor the initial dimension’s place in that final datapoint. Then, when a new dimension is added it could be added to the left or to the right becoming $(2, z, \mathcal{N}(0, I) )$ or $(2, \mathcal{N}(0, I), z)$ where $z$ is the new value to be added. This process is repeated until we reach the terminal time $t=0$ of the reverse generative process. Therefore, that initial dimension could correspond to any of the dimensions in the final generated datapoint depending on where new dimensions are inserted. Furthermore, the final number of dimensions also varies depending on how many jumps occurred during generation. In your example e.g. $(1, 0, \mathcal{N}(0, I), 0)$ this assumes the final dimension is 3 but we make no such assumption in our method.
> *Moreover, if the initial number of dimension is always 1, how do we guarantee that in setting the forward process? I feel like there must be some constraints on the scheduling of rate function*
This is a good point and indeed this constraint influences the choice of forward process rate function, $\lambda_t$. We set it such that, with high probability, the number of dimensions at the end of the forward process is $1$. This is the same idea as in fixed dimension diffusion models where the noising process is such that the corrupted data at the end of the noising process is approximately distributed according to $\mathcal{N}( \mathbf{x}; 0, I)$. Here, we have a noising process such that the corrupted data at the end of the noising process is approximately distributed according to $\mathbb{I} \\{n = 1\\} \mathcal{N}(\mathbf{x}; 0, I)$. In practice we achieve this by setting $\lambda_t$ large enough such that there is a high probability of enough jumps occurring to remove all but one of the dimensions in the datapoint. Further, we set $\lambda_t$ to $0$ when the number of dimensions is equal to $1$ so that we can’t delete down to $0$ dimensions.
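This constraint on $\lambda_t$ can be sketched by simulating just the dimension count of the forward process (illustrative code, ours; the rate value is made up):

```python
import random

def forward_dimension_count(n0, lam, T=1.0, dt=1e-3):
    """Simulate only the number of dimensions under the forward jump process.

    A deletion jump fires with probability lam*dt per step, and the rate is
    clamped to 0 once a single dimension remains, so the process never
    deletes down to 0 dimensions. With lam large enough relative to T, the
    terminal count is 1 with high probability -- the trans-dimensional
    analogue of the corrupted state reaching N(0, I).
    """
    n, t = n0, 0.0
    while t < T:
        if n > 1 and random.random() < lam * dt:
            n -= 1
        t += dt
    return n

random.seed(0)
finals = [forward_dimension_count(n0=10, lam=50.0) for _ in range(100)]
print(sum(f == 1 for f in finals) / len(finals))  # fraction reaching n=1; ~1.0
```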
## References
Tomer Weiss, Luca Cosmo, Eduardo Mayo Yanes, Sabyasachi Chakraborty, Alex M Bronstein,and Renana Gershoni-Poranne. *Guided diffusion for inverse molecular design.* ChemrXiv, 2023
Katherine Crowson. *Clip guided diffusion.* Web Demo, 2021
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for your comments comparing your model with my simple baseline! The joint modeling of both dimensionality and state is indeed important. I agree. Other replies also clearly answer my questions. Although the initial distribution in the backward and forward process has a slight mismatch, I think the way the authors handle it can mitigate the influence.
Thank you for the reply again! | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for their analysis of our paper and very helpful reviews. We were pleased that reviewers considered our work to be a novel and theoretically sound method for tackling the important problem of modeling data with varying dimensionality.
We address comments made by each reviewer in individual responses and attach an additional results pdf here.
Pdf: /pdf/acc61ff6553ea64dd03f5e99afd200dce5f649ab.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
QuadAttac$K$: A Quadratic Programming Approach to Learning Ordered Top-$K$ Adversarial Attacks | Accept (poster) | Summary: The paper proposes a new method for generating top-K adversarial perturbations -- modifying the input such that the classifier predicts the specified K classes, in order. The work addresses this with a two-stage approach, where the first stage computes an adversarial perturbation to the representation, subject to the top-K constraints, and the second stage modifies the input to match this representation.
Strengths: The paper makes a case for top-K adversarial perturbation identification as a step towards more robust systems -- since such perturbations are harder to detect, and a model that is robust to them would be harder to fool. The optimization problem, which is the quadratic objective combined with linear constraints on the non-linear model outputs, is hard to solve, hence the proposed two-stage approach, which allows the use of methods beyond gradient descent -- specifically, QP on the representation, followed by gradient-based input modification to match this representation.
The method is simple and well-motivated, and comparisons with prior works show that the proposed method is able to perform well where the prior work fails (for the same search budget -- Adversarial Distillation is able to find the adversarial top-K perturbations but at a higher cost).
Weaknesses: - While the paper clearly argues for the value of ordered top-K attacks, it is less clear what alternatives might achieve similar goals while being easier to optimize
- Second row of (8) seems to have too many non-zeros
- In (12), the last occurrence of D_T is missing a B term
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - What alternatives, if any, are the authors aware of to the ordered top-K objective, which achieves the same goal (attacks that are harder to detect based on the dependence between predicted class probabilities)?
- The paper notes that qpth is a differentiable QP solver. Is the differentiability used?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and efforts reviewing our submission. We address your concerns as follows.
**Comment 1:** Alternatives to the Ordered Top-K Attack which are easier to optimize.
> **Response:** As discussed in our global response, ordered Top-$K$ adversarial attacks exploit the principle of "Class Coherence," recognizing the relationships or logic connecting classes within ordinal or nominal frameworks. Unlike unordered or Top-1 attacks, ordered Top-$K$ attacks can subtly manipulate predictions while maintaining coherence within the expected context.
> With respect to alternative easier-to-optimize methods of learning ordered Top-$K$ attacks, one potential direction could be to learn a more informative ordered Top-$K$ satisfying distribution to extend the AD method, rather than using the heuristic design method. To that end, a generative model may be trained to become an optimal transport from an easy-to-sample space with no topological holes onto the space of images that generate class-coherent scores (e.g., a bijection from $[-1, 1]^N$ onto the space of images with class-coherent logit vectors), and a less refined adversarial attack may optimize over the input space of this generative model. By doing this, only perturbations that do not disturb the expected or logical inter-class relationships in the predicted logit vectors would be explored. This would rely on the unlikely requirement that the generative model span meaningful perturbations, and further, a generative model would have to be separately trained for each target model (e.g., ResNet-50).
**Comment 2:** Is the differentiability of qpth used?
> **Response:** Thank you for the very observant question. Yes, we make use of the differentiability of qpth. Imagine the minimizer of Eqn. 6 in our paper is given by the function $\delta_{min} = G(x)$. Then we can phrase our loss as $L = \| x - \hat{x} \|, \hat{x} = x + \delta_{min}$, or equivalently $L = \| \delta_{min} \|$. The differentiability of qpth allows us to directly use the gradient of our perturbation with respect to our feature vector, $\frac{d}{dx}[\| \delta_{min} \|]$, for optimization. A non-differentiable solver, on the other hand, would force us to treat $\hat{x}$ as a constant and minimize the loss $L = \| x - C \|$ where $C$ is a constant. While our localized quadratic program is convex, our loss in general is not, and thus a loss with a constant target may force us to chase an ever-moving target: if we minimize the distance to our current solution, the target may actually move even further away in the next iteration. Having access to the gradient $\frac{d}{dx}[\| \delta_{min} \|]$ keeps us from having to rely on a surrogate that has no picture of how the solution of the QP itself changes as we move in the search space.
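To make this concrete, consider a toy stand-in for the QP layer (ours, not qpth, and not the paper's actual program): projecting onto a single half-space constraint has a closed-form minimizer, so both the solution and its gradient with respect to the input are available exactly, mirroring what the differentiable solver exposes:

```python
def qp_min_perturbation(z, a, c):
    """Closed form of the toy QP:  minimize ||delta||^2  s.t.  a . (z + delta) >= c.

    Returns (delta_min, loss, grad) with loss = ||delta_min||^2 and
    grad = d(loss)/dz -- the quantity a differentiable QP solver such as
    qpth provides and a black-box solver would not.
    """
    dot = sum(ai * zi for ai, zi in zip(a, z))
    norm2 = sum(ai * ai for ai in a)
    slack = max(0.0, c - dot)               # how far the constraint is violated
    delta = [slack / norm2 * ai for ai in a]
    loss = slack * slack / norm2
    grad = [-2.0 * slack / norm2 * ai for ai in a]
    return delta, loss, grad

# With grad, a step on z directly reduces the QP objective; treating delta as
# a constant target gives no such guarantee.
delta, loss, grad = qp_min_perturbation(z=[0.0, 0.0], a=[1.0, 0.0], c=2.0)
```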
**Comment 3:** Presentation and typos.
> **Response:** Regarding noted issues, our matrix in Eqn. 8 indeed has too many nonzero items in the second row. The item in row 2 column 4 should be a 0 and the whole matrix should be as follows,
$$
D_T = \begin{bmatrix}
0 & 1 & -1 & 0 & 0 \\
-1 & 0 & 1 & 0 & 0 \\
1 & 0 & 0 & -1 & 0 \\
1 & 0 & 0 & 0 & -1
\end{bmatrix},
$$
> With respect to a missing term in Eqn. 12, we agree there is a missing bias matrix term. The formulation in line 287 though, is correct. Eqn.12 will be corrected to,
$$D_T\cdot (A\hat{z} + B) > 0 \quad \Rightarrow \quad -D_T\cdot A\hat{z} \leq D_T\cdot B - \eta$$
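For concreteness, the corrected $D_T$ can be generated programmatically (our reconstruction of the construction implied by Eqn. 8; the 0-indexing convention is an assumption, not the paper's notation):

```python
def build_DT(targets, num_classes):
    """Build the ordered top-K constraint matrix D_T so that D_T @ z > 0
    encodes z[t1] > z[t2] > ... > z[tK] > z[j] for every non-target class j.

    `targets` lists the desired top-K classes in order (0-indexed here).
    """
    rows = []
    # K-1 adjacent ordering constraints among the targets
    for hi, lo in zip(targets, targets[1:]):
        row = [0] * num_classes
        row[hi], row[lo] = 1, -1
        rows.append(row)
    # the lowest-ranked target must beat every non-target class
    last = targets[-1]
    for j in range(num_classes):
        if j not in targets:
            row = [0] * num_classes
            row[last], row[j] = 1, -1
            rows.append(row)
    return rows

# Target order (class 2, class 3, class 1), 1-indexed, with N = 5 classes
# reproduces the corrected matrix.
assert build_DT([1, 2, 0], num_classes=5) == [
    [0, 1, -1, 0, 0],
    [-1, 0, 1, 0, 0],
    [1, 0, 0, -1, 0],
    [1, 0, 0, 0, -1],
]
```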
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I have read it and the rest of the discussion here, and am keeping the original rating.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you again for your great efforts and time reviewing our submission and checking our rebuttal. | Summary: It identifies that while sufficient to capture top-K attack constraints, hand-crafted surrogate losses are not necessary and often introduce inconsistency and artifacts in optimization. It eliminates the need of introducing surrogate losses. Instead, it keeps the top-K attack constraints in the vanilla form and cast the optimization problem as quadratic programming (QP). It solves the QP by leveraging a recently proposed differentiable QP layer (for PyTorch).
Strengths: It observes that directly minimizing the lp norm of the learned perturbation together with the hand-crafted surrogate loss could miss the chance of exploiting semantic structures of the feature embedding space (i.e., the input space to the final linear classifier). Instead, it minimizes the Euclidean distance between the feature embedding vectors at two consecutive iterations in the optimization. This can be understood as latent perturbation learning versus raw data perturbation learning. The proposed latent perturbation learning enables more consistent optimization trajectories in pursuing the satisfaction of the specified top-K attack constraints. The minimized Euclidean distance is then used as the loss together with the lp norm of the learned perturbation in computing the adversarial perturbation via back-propagation at each iteration.
With the proposed QP formulation, it aims to learning top-K attacks efficiently in terms of the computing budget. It eliminates searching the trade-off parameters. Instead, it uses the low-cost 1×S setting for better practicality (e.g. S = 30 or 60), i.e., using a default trade-off parameter. For large K’s (e.g., K > 10), it shows that the QuadAttacK can still achieve appealing ASR, while the prior art completely fails.
Weaknesses: The presentation can be improved. For example, the second row of $D_T$ in equation 8 seems to be incorrect. The expression in equation 12 does not seem to be correct, which is inconsistent with line 287.
It is still not clear how the proposed attack can be applied in practice. Although it lists some advantages of the successful ordered top-K attacks such as some potential directions, it does not provide practical example usages. It is better to discuss the potential practical applications to highlight the importance of the attack.
It mentions that the computation cost of the method is low. However, it does not discuss the complexity or computation cost theoretically. In experiments, all methods seem to adopt 60 steps optimization. It does not really demonstrate that the computation cost is low. Besides, the QP solver may introduce additional costs. The claim to be more efficient with less computation cost may be inaccurate. It is better to provide more discussions or experiments to show the low cost.
For the baselines, it typically needs multiple steps of binary search and a number of iterations of optimization for each trial of binary search (such as 9x30). But in the paper, the baselines do not perform binary search and the configuration is just like 1x30 or 1x60. It is expected that the baselines without binary search do not perform well. Actually, the baseline under 9x30 can achieve 100% ASR for the top-5 attack. Comparing with these simpler versions of the baselines does not demonstrate that the method is really better, since the baselines are not used in their original way as designed. Besides, since all baselines and the proposed method use the same 1x30 or 1x60 configuration, and the proposed method has additional cost with the QP solver, it is hard to claim that the proposed method is more efficient with fewer computations. Maybe it is better to also demonstrate the results under 9x30 for the baselines to show the performance of the full version, and it is also easier to claim efficiency.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: The presentation can be improved. For example, the second row of $D_T$ in equation 8 seems to be incorrect. The expression in equation 12 does not seem to be correct, which is inconsistent with line 287.
It is better to discuss the potential practical applications to highlight the importance of the attack.
For the baselines, it typically needs multiple steps of binary search and a number of iterations of optimization for each trial of binary search (such as 9x30). But in the paper, the baselines do not perform binary search and the configuration is just like 1x30 or 1x60. It is expected that the baselines without binary search do not perform well. Comparing with these simpler versions of the baselines does not demonstrate that the method is really better, since the baselines are not used in their original way as designed. Besides, since all baselines and the proposed method use the same 1x30 or 1x60 configuration, and the proposed method has additional cost with the QP solver, it is hard to claim that the proposed method is more efficient with fewer computations. Maybe it is better to also demonstrate the results under 9x30 for the baselines to show the performance of the full version, and it is also easier to claim efficiency.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: It is better to discuss the potential negative societal impact of this work as it proposes an attack method in deep learning.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking your time reviewing our paper. We address your concerns as follows.
**Comment 1:** \"The presentation can be improved.\"
> **Response:** We agree and will carefully revise and proofread the paper.
> Regarding noted issues in our matrix in Eqn. 8, it indeed has too many nonzero items in the second row. The item in row 2 column 4 should be a 0 and the whole matrix should be as follows.
$$
D_T = \begin{bmatrix}
0 & 1 & -1 & 0 & 0 \\
-1 & 0 & 1 & 0 & 0 \\
1 & 0 & 0 & -1 & 0 \\
1 & 0 & 0 & 0 & -1
\end{bmatrix}
$$
> With respect to Eqn. 12, we agree there is a missing bias term. The formulation in line 287, though, is correct. Eqn. 12 will be corrected to
$$D_T\cdot (A\hat{z} + B) > 0 \quad \Rightarrow \quad -D_T\cdot A\hat{z} \leq D_T\cdot B - \eta,$$
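As a sanity check on the corrected formulation, the equivalence direction between the margin form and the strict inequality can be verified numerically. This is our illustrative sketch with random stand-in values for $A$, $B$, and $\hat{z}$, not the paper's code:

```python
import numpy as np

# Hypothetical sketch: check D_T (A z_hat + B) > 0 and its margin form
# -D_T A z_hat <= D_T B - eta, with illustrative random A, B, z_hat.
D_T = np.array([
    [ 0,  1, -1,  0,  0],
    [-1,  0,  1,  0,  0],
    [ 1,  0,  0, -1,  0],
    [ 1,  0,  0,  0, -1],
], dtype=float)

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 8))      # illustrative classifier weights
B = rng.normal(size=5)           # illustrative bias
z_hat = rng.normal(size=8)       # illustrative latent feature
eta = 1e-3                       # small positive margin

logits = A @ z_hat + B
strict_form = np.all(D_T @ logits > 0)
margin_form = np.all(-D_T @ A @ z_hat <= D_T @ B - eta)

# Whenever the margin form holds, the strict form must hold too,
# since D_T (A z_hat + B) >= eta > 0.
assert (not margin_form) or strict_form
```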
**Comment 2:** \"It is better to discuss the potential practical applications to highlight the importance of the attack.\"
> **Response:** Please refer to the "Elaborated Motivations of Learning Ordered Top-$K$ Adversarial Attacks" in our global response. We will carefully discuss them in the revision.
**Comment 3:** $9\times *$ vs $1\times *$ budgets and efficiency concerns.
> **Response:** While we want to emphasize our claims of efficiency are with respect to adversarial budget (total number of model gradient calls), you raise a great point regarding the need to compare our baseline methods in their original configuration. For this reason, we have added $9\times *$ results for $K=5$ and $K=10$ configurations in Table 1 in our global response PDF. Our QuadAttac$K$ achieves much better results consistently.
> Additionally since $1\times *$ configurations can be tuned to trade ASR for lower energy and vice versa, we compute and plot full ASR vs L2 Energy tradeoff curves for two attack settings.
> With respect to the computational overhead of our QuadAttac$K$ method due to solving the QP problem at each iteration: qualitatively, we do observe that our method is slower than the baseline methods. As in our responses to Comment 3 by reviewers 2efR and xrmU, we acknowledge this limitation. For a precise understanding of runtime, we have profiled our QuadAttac$K$ and the AD attack on ResNet50 and ViT-B.
>> For ResNet-50, we found that, on average, QuadAttac$K$ performs 2.47 attack iterations per second whereas AD performs 32.02 iterations per second (a factor of 12.96). For ViT-B, QuadAttac$K$ performs 2.96 attack iterations per second whereas AD performs 11.86 iterations per second (a factor of 4).
>> We note that as the target model becomes larger, the adversarial loss constitutes a smaller fraction of total runtime thus the ratio tends toward 1. Further, we note the quicker attack iterations of QuadAttac$K$ on ViT-B which indicates our QP solver converges faster on ViT-B attacks. We will discuss the mean runtimes between different methods in revision.
> To address the overhead of our QuadAttac$K$, we will also explore and compare how the QP solver could be adjusted to initialize the QP solver at the previous iteration's solution to nearly eradicate the cost of the QP solver.
**Comment 4:** \"It is better to discuss the potential negative societal impact of this work as it proposes an attack method in deep learning.\"
> **Response:** On the one hand, we elaborate in the global response some potential scenarios in practice for which the proposed ordered top-$K$ adversarial attacks may be risky if applied. On the other hand, since we focus on clear-box attacks, they are less directly applicable in practice compared to opaque-box attacks, which makes the concern less serious. We will make these clear in the Broader Impact section in revision.
---
Rebuttal Comment 1.1:
Title: discussion
Comment: Thanks for the comment. My concerns are addressed and I changed my score.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you again for your time and great efforts reviewing our submission. We are glad to learn that our rebuttal addressed your concerns. | Summary: This work proposes QuadAttacK, a novel approach to learning ordered top-K adversarial attacks at low cost. The method is based on a quadratic programming formulation that optimizes the attack objective. Notably, this work scales to larger K: for example, K is extended from 10 in previous works to 15. QuadAttacK outperforms state-of-the-art methods in terms of attack success rate and query efficiency on various datasets and architectures.
Strengths: 1. This work is the first to use quadratic programming to learn adversarial attacks. This novelty can benefit the community.
2. The work extends the Top-K attack from Top-10 to Top-15 or even larger at low cost, bringing insights to future work.
3. When $K=15$, the ASR of QuadAttacK is much better than all SOTA methods.
4. Comprehensive results show the generalization of the method.
Weaknesses: 1. QuadAttack is not always better than the baseline methods, and deeper analysis is lacking.
2. The related work[1] is not mentioned. It could also be considered as a baseline method.
3. It would be better if some results about efficiency are provided.
[1] Zhang, Chaoning et al. “Investigating Top-k White-Box and Transferable Black-box Attack.” 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2022): 15064-15073.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your efforts reviewing our submission. We address your concerns as follows.
**Comment 1:** \"QuadAttack is not always better than baseline methods. Some deep analysis is lacked.\"
> **Response:** Thank you for your detailed review. We appreciate the concern raised regarding the comparisons between QuadAttac$K$ and baseline methods (such as AD), particularly in the context of ASR vs Energy tradeoff.
> We first would like to point out that any mentions of *efficiency* in our paper refer precisely to the adversarial compute budget (total number of model backward passes). With that said, consider the $K=5$ and $1\times 30$ DeiT-S configuration results in our original submission (Table 2). Here, our QuadAttac$K$ obtains an ASR of 0.77 with an L2 energy of 3.34, while AD achieves an ASR of 0.34 with an L2 energy of 2.52. This comparison may give the illusion that while QuadAttac$K$ handles more cases, AD appears as to be a lower-energy attack. To disprove this notion, it's crucial to emphasize that the loss weighting of QuadAttac$K$ can be adjusted (so can the baseline methods'). This loss weight allows trading ASR for lower energy (a trade that is typically exponential). Specifically, if we adjust the weight so that QuadAttac$K$'s ASR is 0.34 in this configuration, its L2 energy will be much lower than that of the AD method. For holistic comparisons in terms of the trade-off between ASR and L2 energy, please refer i) to *Table 1 in our global response PDF* which shows our QuadAttac$K$ consistently outperforms the baseline methods, and ii) to *Figures 1 \& 2 in our global response PDF*, which show that our QuadAttac$K$ significantly outperform the AD method.
> Additionally, we expanded our results to include the $9\times *$ configurations. For every attack instance, this configuration performs 9 binary search steps to determine the lowest possible energy for a successful attack. In other words, this configuration reduces the concept of a tradeoff curve to provide more holistic results since the search eliminates the effect of a loss weight choice. This configuration again shows the large margin between QuadAttac$K$ and its baseline methods.
**Comment 2:** \"The related work [1] is not mentioned. It could also be considered as a baseline method.\"
> **Response:** We really appreciate the additional reference suggestion. We will discuss this excellent work by Zhang, Chaoning et al. in revision. In comparisons, we would like to emphasize that \[1\] is not working on the same problem as our QuadAttac$K$. They investigate a different attack setting as discussed in Section 7 (Discussion) of \[1\]. They refer to our baseline work (AD, Zhang and Wu) and explicitly state their Top-$K$ optimization definition is not the same problem. To elaborate, we would like to point out there may exist 3 different kinds of \"Top-$K$\" attacks in the literature as follows.
> - Untargeted Top-$K$ Adversarial Attack (Easiest): Ground truth shouldn't be in the Top-$K$ classes, Top-$K$ classes can be anything
but ground truth.
> - Unordered Top-$K$ Adversarial Attack (Harder): Provides specific target Top-$K$ classes that should be in the Top-$K$ predictions
after the attack but no particular order of appearance is enforced as long as each target class is somewhere in the Top-$K$
predictions.
> - Ordered Top-$K$ Adversarial Attack (Hardest): Provides specific target Top-$K$ classes in order and the Top-$K$ predicted classes
after attack must match this exact order.
> (Zhang, Chaoning et al.) explore the *Untargeted Top-$K$ Adversarial Attack* whereas we focus on the *Ordered Top-$K$ Adversarial Attack*, so we may not be able to straightforwardly compare with their work.
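The three notions above can be made concrete with a small sketch (function names and the example logits are ours, not from either paper):

```python
import numpy as np

def topk(logits, k):
    """Indices of the k largest logits, in descending order."""
    return list(np.argsort(logits)[::-1][:k])

def untargeted_success(logits, ground_truth, k):
    # Ground truth must be pushed out of the Top-K; anything else may appear.
    return ground_truth not in topk(logits, k)

def unordered_success(logits, targets):
    # Target classes must all appear in the Top-K, in any order.
    return set(topk(logits, len(targets))) == set(targets)

def ordered_success(logits, targets):
    # Target classes must appear in the Top-K in this exact order.
    return topk(logits, len(targets)) == list(targets)

logits = np.array([0.1, 2.0, 1.5, 3.0, -1.0])   # scores for 5 classes
print(topk(logits, 3))                          # -> [3, 1, 2]
print(ordered_success(logits, [3, 1, 2]))       # True: exact order matches
print(ordered_success(logits, [1, 3, 2]))       # False: same set, wrong order
print(unordered_success(logits, [1, 3, 2]))     # True: order ignored
```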
**Comment 3:** \"It would be better if some results about efficiency are provided.\"
> **Response:** Our QuadAttac$K$ method has a computational overhead to solve the QP problem at each iteration. We acknowledge this limitation in our response to Comment 3 by the reviewer 2efR and in our original submission. For a precise understanding of runtime, we have profiled our QuadAttac$K$ and the AD attack on ResNet50 and ViT-B.
>> For ResNet-50, we found that, on average, QuadAttac$K$ performs 2.47 attack iterations per second whereas AD performs 32.02 iterations per second (a factor of 12.96). For ViT-B, QuadAttac$K$ performs 2.96 attack iterations per second whereas AD performs 11.86 iterations per second (a factor of 4).
>> We note that as the target model becomes larger, the adversarial loss constitutes a smaller fraction of total runtime thus the ratio tends toward 1. Further, we note the quicker attack iterations of QuadAttac$K$ on ViT-B which indicates our QP solver converges faster on ViT-B attacks. We will discuss the mean runtimes between different methods in revision.
> To address the overhead of our QuadAttac$K$, we will also explore and compare how the QP solver could be adjusted to initialize the QP solver at the previous iteration's solution to nearly eradicate the cost of the QP solver. | Summary: This paper introduces QuadAttackK, a new approach to compute ordered top-K adversarial attacks. The main contribution of this paper is to formulate and efficiently solve the top-K adversarial attack problem via quadratic programming (QP). The experiment results on ImageNet models show that the proposed method improves the attack success rate for large K while maintaining a cheap budget.
Strengths: - Clean formulation + efficient implementation: Introduces a Quadratic Programming (QP) approach to learn ordered top-k clear-box targeted attacks. The solver leverages recent methods in constrained optimization within neural networks (e.g., based on OptNet) to obtain an efficient batched QP implementation.
- State-of-the-art (SOTA) empirical results. The proposed method obtains SOTA attack success rates on ImageNet models such as DenseNet, ResNet and ViTs. It also enables top-k adversarial attacks with large K + low cost budget.
Weaknesses: - The problem of computing *ordered* top-k adversarial attacks lacks some motivation. The motivation in the paper is either too general ("enabling better controllability in learning attacks that are more difficult to defend, revealing deeper vulnerability of a trained DNN") or vague / confusing ("testing the robustness of an attack method itself, especially when K is relatively large").
- Presentation and writing can be significantly improved (e.g., figures and tables are hard to parse, section 4 does not describe the baselines and evaluation metric clearly)
- As noted in the paper, QuadAttackK is slow compared to baselines (need to solve a QP after every iteration) and the attacks do not transfer well to other models.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: None
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time reviewing our paper. In the following, we address your comments point by point.
**Comment 1:** \"The problem of computing ordered top-k adversarial attacks lacks some motivation.\"
> **Response:** Please refer to the \"Elaborated Motivations of Learning Ordered Top-$K$ Adversarial Attacks\" in our global response. We will carefully discuss them in the revision.
**Comment 2:** \"Presentation and writing can be significantly improved.\"
> **Response:** Thank you for your recommendations regarding paper presentation. We will carefully revise the paper. Regarding noted issues, our matrix in Eqn. 8 indeed has too many nonzero items in the second row. The item in row 2 column 4 should be a $0$ and the whole matrix should be as follows.
$$
D_T = \begin{bmatrix}
0 & 1 & -1 & 0 & 0 \\
-1 & 0 & 1 & 0 & 0 \\
1 & 0 & 0 & -1 & 0 \\
1 & 0 & 0 & 0 & -1 \\
\end{bmatrix}
$$
> With respect to Eqn. 12, we agree there is a missing bias term. The formulation in line 287, though, is correct. Eqn. 12 will be corrected to
$$D_T\cdot (A\hat{z} + B) > 0 \quad \Rightarrow \quad -D_T\cdot A\hat{z} \leq D_T\cdot B - \eta,$$
> With respect to clearer descriptions of the baselines and evaluation metric, we will revise the paper to make it self-contained regarding those aspects.
**Comment 3:** \"QuadAttac$K$ is slow compared to baselines (need to solve a QP after every iteration) and the attacks do not transfer well to other models.\"
> **Response:** We acknowledge the limitations raised in terms of the transferability and the QP Solving computational overhead.
> With respect to the transferability of learned attacks, we deliberately chose to focus on the complexity of this optimization problem in the clear-box setting, and on the learnability of ordered top-$K$ attacks, especially for large $K$'s. We note that learning transferable clear-box attacks is a challenging problem even in the traditional top-$1$ setting. We leave the study of attack transferability as a future endeavor. One potential starting point could be to investigate how to apply our QuadAttac$K$ to multiple different networks simultaneously. Another potential direction is to first gain a better understanding of the alignment of the latent spaces (the input space to the linear classifier) between different networks, and then to guide the learning of attacks to focus on those aligned subspaces.
>With respect to the overhead of solving a QP problem at every iteration, we acknowledge this limitation in our responses to other reviewers and in our original submission. For a precise understanding of runtime, we have profiled our QuadAttac$K$ and the AD attack on ResNet50 and ViT-B.
>> For ResNet-50, we found that, on average, QuadAttac$K$ performs 2.47 attack iterations per second whereas AD performs 32.02 iterations per second (a factor of 12.96). For ViT-B, QuadAttac$K$ performs 2.96 attack iterations per second whereas AD performs 11.86 iterations per second (a factor of 4).
>> We note that as the target model becomes larger, the adversarial loss constitutes a smaller fraction of total runtime thus the ratio tends toward 1. Further, we note the quicker attack iterations of QuadAttac$K$ on ViT-B which indicates our QP solver converges faster on ViT-B attacks. We will discuss the mean runtimes between different methods in revision.
> To address the overhead of our QuadAttac$K$, we will also explore and compare how the QP solver could be adjusted to initialize the QP solver at the previous iteration's solution to nearly eradicate the cost of the QP solver.
---
Rebuttal Comment 1.1:
Title: Update
Comment: Thanks for the response. I read the rebuttal and I would like to keep my score as is. | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their constructive feedback, which helps us greatly improve our submission. We first address some common concerns.
**Drastically Improved Results.** Please refer to *Table 1 in our global response PDF* for the improved results. In optimization, perturbations are initialized with white Gaussian noise of some small energy $\epsilon$. During the initial steps of optimization, the optimizer takes large steps that increase the perturbation energy, since $\epsilon$ is typically far from the energy required for a successful attack. These large increases in energy induce momentum in the $AdamW$ optimizer, which makes it difficult to reduce the L2 energy in later iterations even if our objective function's gradient points toward a direction of minimal energy. By introducing a small number of warmup steps (e.g., 5, as commonly done when training a network on ImageNet from scratch), after which the optimizer's state is reset, we have managed to improve the performance of all analyzed methods. Our QuadAttac$K$ benefits most.
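The warmup-reset idea can be sketched on a toy objective. This is a minimal Adam-style update written by us for illustration, not the authors' attack code:

```python
import numpy as np

def adam_step(x, g, state, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; `state` holds the first/second moment estimates."""
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * g
    state["v"] = b2 * state["v"] + (1 - b2) * g ** 2
    m_hat = state["m"] / (1 - b1 ** state["t"])
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return x - lr * m_hat / (np.sqrt(v_hat) + eps)

x = np.full(4, 5.0)                       # stand-in perturbation, far from optimum
state = {"t": 0, "m": np.zeros(4), "v": np.zeros(4)}
for step in range(60):
    if step == 5:                         # warmup over: reset optimizer state
        state = {"t": 0, "m": np.zeros(4), "v": np.zeros(4)}
    g = 2 * x                             # gradient of the toy energy ||x||^2
    x = adam_step(x, g, state)

# The "energy" is lower than at initialization; resetting discards the
# momentum accumulated during the large early steps.
print(np.linalg.norm(x))
```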
**Analyses on the Trade-Off Between Attack Success Rates and Attack Energies.** Please refer to *Figures 1 \& 2 in our global response PDF*. These tradeoff curves explore the concept of how a higher success rate may be achieved by choosing to have higher energies and conversely a lower energy may be achieved by choosing to have a lower success rate. Nonetheless, these curves holistically compare the capacity of QuadAttac$K$ against the baseline method.
> Additionally, we expanded our results to include the $9\times *$ configurations. For every attack instance, this configuration performs 9 binary search steps to determine the lowest possible energy for a successful attack. In other words, this configuration reduces the concept of a tradeoff curve to provide more holistic results since the search minimizes the effect of a loss weight choice. This configuration again shows the large margin between QuadAttac$K$ and its baseline methods.
**Elaborated Motivations of Learning Ordered Top-$K$ Adversarial Attacks:**
These attacks exploit the principle of \"Class Coherence\", recognizing the relationships or logic connecting classes within ordinal or nominal frameworks. Unlike unordered or Top-1 attacks, ordered Top-$K$ attacks can subtly manipulate predictions while maintaining coherence within the expected context.
In an ordinal context like credit ratings (*\[Extremely High Risk, Very High Risk, High Risk, Moderate Risk, Low Risk, Minimal Risk, No
Risk\]*), they can downgrade a rating without disrupting the logical flow, making it less detectable. In a nominal context such as a
recommendation system attacking predicted user interests in shopping classes from *\[Books, Movies, Beauty, Furniture, Fashion\]* into
*\[Fashion, Books, Movies, Beauty, Furniture\]* in order; subtly pushing Fashion onto the user while keeping them engaged with their true
interests.
An unordered change might disrupt these logical groupings (e.g., pushing Fashion to the top but also inadvertently pushing Furniture up as well), leading to a clumsy attack, raising a red flag to a human user, or even trivial detection in a system that validates class coherence. These refined and subtle manipulations can have diverse consequences, from affecting financial decisions to subtly influencing user behavior on online platforms. They are more challenging to detect than other types of attacks, both quantitatively and from a human standpoint, and represent a nuanced and significant threat that warrants deeper exploration in adversarial machine learning. For completeness, we provide more specific examples below.
> Ordinal example: Imagine a cancer risk assessment tool that analyzes 2D medical images like mammograms to categorize patients' cancer risk into the ordinal 7-level risk ratings (as the credit ratings). An oncologist could use this tool to triage patients, prioritizing those in the highest risk categories for immediate intervention. An attacker aiming to delay treatment might use an ordered top-5 adversarial attack to change a prediction for a patient initially assessed as Very High Risk. They could target the classes *\[High Risk, Moderate Risk, Low Risk, Minimal Risk, Very High Risk\]*, subtly downgrading the urgency without breaking the logical sequence of risk categories. An unordered attack, in contrast, might lead to a sequence like *\[Low Risk, Very High Risk, Minimal Risk, Moderate Risk, High Risk\]*, disrupting the ordinal relationship between classes. Such a disruption could raise red flags, making the attack easier to detect.
> Nominal class example: Traffic control systems could use deep learning to optimize flow by adjusting the timing of traffic lights based on the types of vehicles seen. Priority might be given to certain vehicle classes, such as public transit or emergency vehicles, to improve
response times. Imagine a city's traffic control system, which has specific traffic light timing behavior for the nominal vehicle
categories *\[Emergency Vehicle, Public Transit, Commercial Vehicle, Personal Car, Bicycle\]*. Public transit might be given slightly
extended green lights during rush hours to encourage public transportation use. An attacker wanting to cause delays for personal
cars without raising alarms could launch an ordered Top-2 adversarial attack, targeting the sequence *\[Commercial Vehicle, Public
Transit\]*. This would cause the system to interpret most personal cars as commercial vehicles during the attack, applying the extended green light times meant for public transit to lanes primarily used by commercial vehicles. An unordered top-2 attack that may result in
*\[Emergency Vehicle, Commercial Vehicle\]*, would likely be quickly detected, as emergency vehicle priority changes are significant and
could be easily noticed by traffic operators (this weakness is exacerbated in any Top-1 attack or unordered attacks).
Pdf: /pdf/67c488f42787af52bf1982e862c458e77138b8f4.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Scalable Membership Inference Attacks via Quantile Regression | Accept (poster) | Summary: This paper introduces a new class of membership inference attacks based on performing quantile regression on the distribution of confidence scores induced by the model under attack on points that are not used in training. The approach is computationally efficient and does not require knowledge of the model's architecture, making it truly "black-box". The paper discusses the efficacy of this approach through extensive experiments on various datasets and model architectures. The experiments show that this approach is competitive with (and sometimes more effective than) much more computationally expensive shadow model approaches. Overall, the paper presents a more scalable and efficient membership inference attack.
Strengths: - interesting topic
- novel method
- well-written paper
Weaknesses: - more evaluation metrics needed
- compare with different attacks
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - I appreciate it if the authors could also compare the proposed method to the model-based attacks (https://arxiv.org/abs/1610.05820) and metric-based attacks. (https://arxiv.org/abs/2003.10595)
- Why the proposed method performs worse on C-10 and C-100 but better on IN-1k? I would suggest the authors elaborate more on it.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - Besides accessing the performance, it is also important to report the time/computational cost of different methods to further demonstrate the efficiency of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your careful reading and useful feedback --- below we address your specific questions. We're happy to further discuss any of these points if there are any remaining questions or confusions!
> *I appreciate it if the authors could also compare the proposed method to the model-based attacks (https://arxiv.org/abs/1610.05820) and metric-based attacks. (https://arxiv.org/abs/2003.10595)*
These are methods that pre-date LIRA, and focus on global metrics of success (rather than performance of the attacks at low FPRs). We choose LiRA as our main point of comparison as it is the state-of-the-art for membership inference when looking at performance on the presented datasets at low FPR. The original LiRA paper directly compares LiRA to the approaches you reference and shows that LiRA outperforms them, which is our basis for choosing LiRA as a state of the art method. Both methods are appropriately cited in our related work section.
> *Why the proposed method performs worse on C-10 and C-100 but better on IN-1k? I would suggest the authors elaborate more on it.*
We discuss this a bit in the paper, but are happy to elaborate further in the next revision. We find that uniformly across all of our experiments, the attack that obtains the lowest pinball loss has the best performance. In the smaller data regimes like C-10, we find that we can extract a quantile regression model from the LiRA shadow models that has lower pinball loss than the regression model we can obtain by directly optimizing for pinball loss (and correspondingly, out-performs). Direct optimization of pinball loss appears to work better in the large data regime, which is also where the computational benefits of our approach are most pronounced.
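For reference, the pinball (quantile) loss discussed above can be sketched as follows (our illustrative code, not the paper's implementation). For a constant predictor, the empirical $q$-quantile of the scores minimizes the empirical pinball loss, which is why lower pinball loss tracks better quantile estimates:

```python
import numpy as np

def pinball_loss(pred, y, q):
    """Mean pinball loss of a (scalar) prediction for target quantile q."""
    diff = y - pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

rng = np.random.default_rng(1)
scores = rng.normal(size=100_000)          # stand-in confidence scores
q = 0.99
best = np.quantile(scores, q)              # empirical 99th percentile

# The empirical quantile attains a lower pinball loss than nearby constants.
assert pinball_loss(best, scores, q) <= pinball_loss(best + 0.5, scores, q)
assert pinball_loss(best, scores, q) <= pinball_loss(best - 0.5, scores, q)
```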
> *Besides accessing the performance, it is also important to report the time/computational cost of different methods to further demonstrate the efficiency of the proposed method.*
We will add final clock times to the paper in our next revision. For reference, the base ImageNet-1k model took 18 hours to train using our setup, while the quantile MIA attack took 40 hours on the same server, including hyper-parameter tuning; roughly the cost of $2.2$ shadow models. On our recently added CINIC10 results, it took 85 minutes to train the base network (and each subsequent shadow model), and 4 hours 40 minutes for hyper-parameter tuning; roughly the cost of $3.3$ shadow models, with performance comparable to 4 shadow models. The time cost for a single quantile regression trial was 18 minutes. On our tabular examples, we perform hyper-parameter tuning on both the base and target models, and thus our approach has the same compute cost as a single shadow model.
---
Rebuttal Comment 1.1:
Comment: The authors address my concerns, and I would be happy to raise my score.
---
Reply to Comment 1.1.1:
Comment: Thanks! We appreciate it. | Summary: This work presents a novel membership inference attack that offers computational efficiency compared to state-of-the-art (SOTA) approaches. While the paper addresses a topical issue and provides an alternative attack method, there are several weaknesses that need to be addressed for a more comprehensive and convincing study, including the lack of evaluation of TPR@lowFPR, concerning empirical results, insufficient support for computational efficiency claims, and a lack of empirical evidence for the theoretical results. Addressing these weaknesses is essential to enhance the paper's credibility and provide a more comprehensive evaluation of the proposed attack method. Finally, I would be happy to increase my score if the authors could alleviate some of my concerns.
Strengths: **Topical and relevant**:
The paper addresses a relevant topic in the field of membership inference attacks, which is of interest to conference readers. The proposed attack method introduces a novel approach that offers computational efficiency compared to existing SOTA methods.
**Competitive performance without training multiple shadow models**:
The authors suggest an attack method that competes with state-of-the-art approaches while being computationally more efficient. By eliminating the need to train several shadow models, their method offers a practical advantage, which is a significant contribution to the field.
Weaknesses: Regarding the theoretical results:
**Lack of TPR@lowFPR evaluation**: The theoretical results presented in the paper focus on controlling the false positive rate (FPR). However, what truly matters in membership inference attacks (see [1]) is the true positive rate (TPR) at low FPR. The paper lacks analysis of TPR@lowFPR, which is crucial for assessing the attack's effectiveness. (This recent paper [2] could help the authors with this kind of analysis).
Regarding the empirical results:
**Concerning empirical results**
- The quantile attack outperforms the Likelihood Ratio Test-based Inference Attack (LIRA) in terms of TPR@lowFPR on ImageNet. This is surprising, as the Neyman-Pearson lemma suggests that the LRT is the most powerful test. The discrepancy may be due to the limited computational resources used in training shadow models, or to differences in the training setup of the underlying ImageNet classifier, as batch size, number of epochs, etc. impact susceptibility to membership inference attacks (e.g., see [1]).
- The replicated LIRA attacks with 8 and 4 shadow models outperform the read-off LIRA attack with 64 shadow models [2]. This observation questions the authors' conclusion that their attack is more favorable. It suggests that if the authors had trained 64 shadow models on ImageNet, the LIRA attack would perform better, as indicated by the evaluation on tabular datasets.
**Computational intensity claims**: The authors argue that their method is computationally more efficient by eliminating the need for training shadow models. However, there is no empirical evidence supporting this claim. It is crucial to provide comparative plots showing the tradeoff between computational cost and TPR@fixed FPR to substantiate this argument.
**Lack of empirical support for theoretical results**: The theoretical results regarding FPR control lack empirical evaluation. Conducting simulation studies or additional experiments would provide empirical evidence to support these claims.
-----
**References**
[1] Membership Inference Attacks From First Principles, https://arxiv.org/abs/2112.03570
[2] Gaussian Membership Inference Privacy, https://arxiv.org/abs/2306.07273
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Could the authors please elaborate on why the read-off attack results from [2] that use 64 shadow models are performing worse than the authors' implemented attacks with 4 and 8 shadow models? There must be some discrepancies in the underlying models that have been trained!?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: This point should be ok.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your careful reading and useful feedback --- below we address your specific questions. We're happy to further discuss any of these points if there are any remaining questions or confusions!
> *The theoretical results presented in the paper focus on controlling the false positive rate (FPR). However, what truly matters in membership inference attacks (see [1]) is the true positive rate (TPR) at low FPR. The paper lacks analysis of TPR@lowFPR, which is crucial for assessing the attack's effectiveness. (This recent paper [2] could help the authors with this kind of analysis).*
Our theory concerns the false positive rate of our attacks, which is something that can be established in a model-agnostic way (i.e. without making any assumptions about the model that we are attacking). It is not possible to prove similar theorems about the attack having high true positive rates at low false positive rates, for the simple reason that not all models are vulnerable to membership inference attacks. For example, models that are trained in a differentially private way for small values of $\epsilon$ have the provable property that no membership inference attack with low false positive rate can have a high true positive rate. Thus, as with much work on hypothesis testing and membership inference, we prove one-sided theoretical guarantees, and then evaluate the tradeoff between false positive rates and true positive rates empirically. We note that our empirical analyses are indeed focused on the low false positive rate regime: e.g. the false positive rate axis in figure 1 is on a log scale, and goes as low as $10^{-4}$, and tables in our submission present precision values at low false positive rates.
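The model-agnostic FPR guarantee described above can also be checked in simulation. A minimal sketch (synthetic scores, not the paper's experiments): thresholding at the empirical $(1-\alpha)$-quantile of held-out non-member scores yields a false positive rate close to $\alpha$, whatever the underlying score distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.01  # target false positive rate of the attack

# Stand-in for the attacked model's confidence scores on points never
# used in training; the guarantee does not depend on this distribution.
calib = rng.standard_normal(100_000)  # held-out scores used for calibration
fresh = rng.standard_normal(100_000)  # fresh non-member scores

threshold = np.quantile(calib, 1.0 - alpha)
fpr = float(np.mean(fresh > threshold))  # fraction falsely flagged as members
# fpr comes out close to alpha, empirically confirming FPR control.
```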
> *The quantile attack outperforms the Likelihood Ratio Test-based Inference Attack (LIRA) in terms of TPR@lowFPR on ImageNet....*
Neyman-Pearson implies that thresholding likelihood ratios gives an optimal test _under the assumption that the likelihood ratios are known exactly_. However, in the context of membership inference attacks, these likelihood ratios are never known, as they arise from a complex empirical process. LiRA uses a simple parametric family of models to attempt to approximate the likelihoods from samples, but this approach comes with no guarantees, as in general the true likelihood ratios need not be well approximated by anything in the parametric class.
Under the assumption that the likelihood ratios are monotonic in the test statistic (which is true for the parametric family of distributions used by LiRA), a likelihood ratio test also corresponds to a thresholding of the test statistic (which is what our attack does). Thus both our attack and LiRA can be viewed as consistent with the structure of an optimal hypothesis test by Neyman Pearson: we differ in what properties about the test statistic distribution we choose to estimate, and whether we perform those estimates parametrically or non parametrically. We can elaborate on this in the revision.
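As an illustration of the monotonicity point (a sketch with hypothetical parameter values, not the paper's fitted distributions): for an equal-variance Gaussian family, the log-likelihood ratio is linear in the score, so any LLR threshold corresponds to a raw-score threshold.

```python
import numpy as np

# Equal-variance Gaussians, as in LiRA's parametric family:
# member scores ~ N(mu1, sigma^2), non-member scores ~ N(mu0, sigma^2).
mu0, mu1, sigma = 0.0, 2.0, 1.0
s = np.linspace(-5.0, 10.0, 1_000)

# log p_member(s) - log p_nonmember(s); the quadratic terms cancel,
# leaving a function that is linear (hence monotone increasing) in s.
llr = ((s - mu0) ** 2 - (s - mu1) ** 2) / (2.0 * sigma ** 2)
```

Because the LLR increases monotonically in the score, thresholding the LLR and thresholding the raw score flag exactly the same set of points.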
> *The replicated LIRA attacks with 8 and 4 shadow models outperform the read-off LIRA attack with 64 shadow models [2]. This observation questions the authors' conclusion that their attack is more favorable. It suggests that if the authors had trained 64 shadow models on ImageNet, the LIRA attack would perform better, as indicated by the evaluation on tabular datasets.*
The description of LiRA in the original paper is not completely specified, so we did our best to implement the described method; our implementation outperformed their stated results. Unfortunately, we didn't have the computational power to replicate their 64-model results (which is part of the motivation for our work, which is about reducing computational costs), but we did provide their stated results for comparison. We expect that, were we able to run it, our implementation of LiRA with 64 models would outperform our implementation with 8 models, but it is not clear that doing so would outperform our attack, since there are diminishing marginal returns to using more models.
> *Computational intensity claims: The authors argue that their method is computationally more efficient by eliminating the need for training shadow models. However, there is no empirical evidence supporting this claim. It is crucial to provide comparative plots showing the tradeoff between computational cost and TPR@fixed FPR to substantiate this argument.*
We will add final clock times to the paper in our next revision. For reference, the base ImageNet-1k model took 18 hours to train using our setup, while the quantile MIA attack took 40 hours on the same server including hyper-parameter tuning, roughly the cost of $2.2$ shadow models; each individual quantile regression trial took 75 minutes. On our recently added CINIC10 results, it took 85 minutes to train the base network (and each subsequent shadow model) and 4 hours 40 minutes for hyper-parameter tuning, roughly the cost of $3.3$ shadow models, with performance comparable to 4 shadow models; the time cost for a single quantile regression trial was 18 minutes. On our tabular examples, we perform hyper-parameter tuning on both the base and target model, and thus our approach has the same compute cost as a single shadow model.
It is worth noting that the performance of LiRA is rather sensitive to the architecture choice and the data augmentation strategies, all of which are treated as hyperparameters. In scenarios where the attacker has only API access to the model, our algorithm is favorable, since it doesn't rely on any information regarding the architecture, the training process, or the augmentation strategies of the underlying model, whereas LiRA would suffer.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you for your response and for your clarifications. I still think that the results on FPR control are not really interesting and that they do not add much insight to the paper. As the authors have partly addressed my concern, I increase my score accordingly under the assumption that the authors will include i) computational resource stats (compute + clock time, as the Carlini et al attack can be parallelized and then clock time is a less reasonable measure) for all their existing results and that ii) Figure 1 from the main paper will be modified so that the readout results with $n=64$ shadow models from Carlini et al are not featured in the figure, as they seem highly misleading (the authors admitted that 'the description of LiRA in the original paper is not completely specified', and so comparing the authors' results to those by Carlini et al in this fashion is likely to be incorrect and misleading).
---
Reply to Comment 1.1.1:
Comment: A sincere thanks for your response --- we really appreciate the engagement.
We will update the subsequent draft with compute times, though we do highlight that we used an Async HyperBand scheduler [1] and HyperOpt search [2], which enable parallelization of the search procedure. We will remove the 64-model readout from the figure to avoid potential confusion.
[1] Li, L., Jamieson, K., Rostamizadeh, A., Gonina, E., Ben-Tzur, J., Hardt, M., ... & Talwalkar, A. (2020). A system for massively parallel hyperparameter tuning. Proceedings of Machine Learning and Systems, 2, 230-246.
[2] Bergstra, J., Yamins, D., & Cox, D. D. (2013). Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures. Proc. of the 30th International Conference on Machine Learning (ICML 2013), June 2013, pp. I-115 to I-123.
Strengths: The proposed approach only requires the training time of a single shadow model.
The proposed attack often reaches very high precision at low false positive rates. Precision is often competitive with attacks that require many shadow models.
The evaluation considers several datasets, including both image and tabular data.
The paper is easy to read.
Weaknesses: Some magic happens in the experiment section in the paragraph starting at line 290 (page 7). Here, the authors appear to introduce a different objective for their attack model, to directly learn a sample’s mean and deviation. I would like to see some ablation of the choices made in this paragraph.
The proposed attack has a higher “online” computation cost than the offline variant of LiRA. LiRA can train shadow models independently of the target model, while the quantile attack requires the target model to train its attack model.
From plots, it looks like the attack performs poorly at higher false positive rates. In settings where attack accuracy or attacker advantage is preferred to TPR and FPR, this attack might end up worse than existing attacks.
Results are only reported for a single target model, rather than averaging over multiple target models.
The attack model requires hyperparameter tuning, which can be amortized over multiple shadow models when running LiRA.
Small comment: The definition of precision should have TPR in the numerator
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Is the pinball loss used at all when attacking the CIFAR models? If it is, could you try to explain in more detail how it is used?
Do you have thoughts on why the attacks start to perform worse than other attacks at higher FPR?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper does not discuss limitations. It has minor overclaiming in some places, such as when saying the attack requires only a single model, when all results are presented after hyperparameter tuning. Another place is in claiming existing shadow model attacks require knowledge of the target model architecture, which is untrue — the LiRA paper evaluates with different architectures.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your careful reading and useful feedback --- below we address your specific questions. We're happy to further discuss any of these points if there are any remaining questions or confusions!
> *Some magic happens in the experiment section in the paragraph starting at line 290 (page 7). Here, the authors appear to introduce a different objective for their attack model, to directly learn a sample’s mean and deviation. I would like to see some ablation of the choices made in this paragraph.*
Apologies for any confusion! The general method is the same: we solve a quantile regression problem with the goal of minimizing pinball loss. We find that for smaller datasets, and simpler tasks, doing parametric quantile regression (by fitting a parametric model, rather than non-parametrically minimizing pinball loss), leads to lower pinball loss than our direct minimization used for larger datasets. This may have to do with generalization issues in these more data scarce regimes. Thanks for flagging the confusion --- we will elaborate in the next revision.
> *The proposed attack has a higher “online” computation cost than the offline variant of LiRA...*
It is true that to train our quantile regression model, we first need to evaluate the model we are going to attack on the points in a validation set, whereas the offline variant of LiRA does not need this evaluation step before training its shadow models. We will mention this in the next revision --- but note that this is a low-order cost compared to training LiRA's shadow models (and our own quantile regression training): Each evaluation of the model under attack requires a single forward pass. On the other hand, training a model of the same architecture requires multiple forward and backward passes per data point.
> *From plots, it looks like the attack performs poorly at higher false positive rates. In settings where attack accuracy or attacker advantage is preferred to TPR and FPR, this attack might end up worse than existing attacks.*
We follow the same principle as Carlini et al. 2022 by focusing on TPR at low FPR, and agree with their point that attacker advantage is a poor indicator of performance at low FPR. For reference, see the accompanying pdf with results on the CINIC10 dataset, which includes attacker advantage; all tested methods perform similarly on this metric.
> *Results are only reported for a single target model, rather than averaging over multiple target models.*
Agreed that it would be useful to show this method works well over the randomness of SGD, if that is what you mean! We're happy to show averages/coverage intervals of this approach over that randomness in future drafts.
> *Is the pinball loss used at all when attacking the CIFAR models? If it is, could you try to explain in more detail how it is used?*
We found that using empirical pinball loss in hyperparameter tuning for CIFAR attacks was effective. As mentioned above, we achieve lower test pinball loss for small data/model regimes by using another objective function. Why this takes place is something we definitely hope to explore further.
In all of our experiments, our goal is to find a quantile regression model that minimizes pinball loss. On ImageNet, we found that the most effective way to do this was to directly minimize pinball loss "non-parametrically" -- i.e. without trying to fit a parametric probability distribution to the scores. To do this, we create multiple outputs for a single quantile regression model, each with its own target quantile $\alpha$ (e.g. logarithmically spaced $\alpha$ values) and the entire network is trained to minimize pinball loss simultaneously across all outputs.
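The multi-output training objective described above can be sketched as follows (illustrative NumPy, not the authors' code; the $\alpha$ values shown are hypothetical):

```python
import numpy as np

def pinball_loss(y, y_hat, alpha):
    """Pinball (quantile) loss for target quantile alpha in (0, 1)."""
    diff = y - y_hat
    return float(np.mean(np.maximum(alpha * diff, (alpha - 1.0) * diff)))

def multi_quantile_loss(y, y_hats, alphas):
    """Summed pinball loss across per-quantile output heads.

    y_hats[:, i] is the head predicting the alphas[i] quantile of y.
    """
    return sum(pinball_loss(y, y_hats[:, i], a) for i, a in enumerate(alphas))

# e.g. heads for the 0.9, 0.99, and 0.999 quantiles (1 - alpha log-spaced):
alphas = [0.9, 0.99, 0.999]
```

Minimizing pinball loss at level $\alpha$ pulls a constant prediction toward the empirical $\alpha$-quantile of the scores, which is exactly the threshold the attack needs.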
For the smaller datasets like CIFAR, CINIC10, and OpenML tabular datasets, we are able to get lower test pinball loss by instead fitting a parametric Gaussian model to the score distributions;
the model computes the negative log-likelihood (NLL) of the score under the Gaussian distribution defined by the predicted mean and variance pair for a sample, and the objective function of the minimization problem is the average NLL over samples. After training, quantiles at specific $\alpha$ values for a sample under attack are computed from the fitted Gaussian distribution defined by the model's predicted mean and variance.
Why parametric quantile regression methods outperform direct pinball loss minimization (as measured by test pinball loss) on smaller datasets is something we hope to explore further.
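A sketch of the parametric variant just described (standard-library Python; the numeric values are purely illustrative): the predicted mean and variance define a Gaussian whose NLL serves as the training objective, and whose closed-form quantiles give the attack thresholds.

```python
import math
from statistics import NormalDist

def gaussian_nll(score, mu, sigma):
    """Negative log-likelihood of an observed score under N(mu, sigma^2);
    averaged over samples, this is the training objective."""
    return 0.5 * math.log(2.0 * math.pi * sigma ** 2) \
        + (score - mu) ** 2 / (2.0 * sigma ** 2)

def gaussian_quantile(mu, sigma, alpha):
    """alpha-quantile of the predicted N(mu, sigma^2) score distribution,
    used as the attack threshold for a given sample."""
    return mu + sigma * NormalDist().inv_cdf(alpha)
```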
> *Do you have thoughts on why the attacks start to perform worse than other attacks at higher FPR?*
We don't know; non-parametric quantile regression via pinball loss minimization appears to work less well (in our setting at least) for lower target quantiles. But we believe that the low false positive rate regime is the most interesting/important from the point of view of an attacker.
> *The paper does not discuss limitations. It has minor overclaiming in some places, such as when saying the attack requires only a single model, when all results are presented after hyperparameter tuning. Another place is in claiming existing shadow model attacks require knowledge of the target model architecture, which is untrue — the LiRA paper evaluates with different architectures.*
Thanks for the comments: we'll look out for instances of overclaiming and try and dial them back. We can clarify that our attack only requires a single regression model trained on a holdout $S$, though performance of the attack can be optimized using hyperparameter tuning which involves training several models and using the best one.
We also highlight that the difference in architectures is a sensitive parameter for shadow model attacks, as evidenced by Fig. 11 in the LiRA paper. Our approach doesn't require knowledge of the architecture, since quantile regression is essentially a different task than the original classification task.
---
Rebuttal Comment 1.1:
Title: thank you for the reply
Comment: I'm happy to keep my score!
Re single target model: Generally when evaluating LiRA, it's common to train multiple models on different subsets of the training data, and verify that MI is successful for all of these. So this should capture more randomness than just SGD randomness.
Re different ways to get low pinball loss: I think it's interesting that you may need quite different approaches to get small pinball loss on different datasets. This could be a limitation in practice, though, since this could end up with training a bunch of different models to see what works best.
Re Fig 11 in LiRA paper: My reading of this figure is that the shadow model architecture (11a) doesn't matter that much, but I suppose that's open to interpretation.
---
Reply to Comment 1.1.1:
Comment: A sincere thanks for your response --- we really appreciate the engagement.
We will update the draft in subsequent revisions by carrying out our attack on multiple data splits.
Regarding the difficulty of directly optimizing pinball loss on small datasets: smaller dataset sizes, task difficulty, and model expressiveness could all play a role. So far our experiments suggest that parametric approaches are very likely to be the most successful option for this setting, and we plan to investigate this direction further.
Strengths: Reducing many (around hundred) shadow models to a single one is interesting.
Experiment studies on different image datasets, including CIFAR-10, CIFAR-100, ImageNet-1K.
Weaknesses: -- Paper writing still needs some improvements. It requires efforts to follow.
-- The parameter $\alpha$ is important yet hard to adjust manually.
-- Currently, I have not observed any significant influence of the ongoing work on industry-level applications. Despite this, it is important to note that through additional analysis and exploration, there may be a possibility of uncovering substantial potential for impact in the future. As of now, the ongoing work does not appear to have made any noticeable waves in the industry. Nevertheless, by delving deeper into its intricacies and exploring its possibilities, we might unveil its capacity to bring about significant changes that could shape the industry landscape in the coming years.
-- The authors fail to provide a comprehensive explanation of the significance of membership inference attacks (MIAs). It remains unclear why it is crucial to prevent the disclosure of specific data points that were used in training a model. If we assume that we have already inferred the utilization of certain data during the training process, what kind of significant problems could arise as a result? While consulting the references cited in the submission may shed some light on these matters, the paper itself should strive to be self-contained and inclusive, providing a thorough understanding of the subject matter without relying heavily on external sources.
-- In order to facilitate better comprehension, it is necessary to reorganize the related work more effectively. By improving the organization of the related work, we can enhance its accessibility and ensure that readers can easily grasp its significance. A well-structured presentation of the related work will contribute to a clearer understanding of the subject matter.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: All the current MIAs, including this submission, are based on the observation that models tend to overfit their training sets. Will it still be true for current big models trained on big data? My understanding is that the assumption under MIAs is mainly caused by the generalisation ability of most models.
Will the conclusion still be true in other domains, e.g., natural language and audio?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Please refer to the weaknesses shown above.
Moreover, the contribution of this work appears to be rather limited in scope. To put it simply, the essence of this study involves training a regression model using only a subset of the available training data. It therefore follows that the significance and potential impact of this work may be somewhat restricted.
It would be better to show experiments on other image models, such as those based on ViT, Swin Transformer, BEiT, the image encoder in CLIP, etc.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your careful reading and useful feedback --- below we address your specific questions. We're happy to further discuss any of these points if there are any remaining questions or confusions!
> *The parameter $\alpha$ is important yet hard to adjust manually.*
$\alpha$ is the desired false positive rate of the attack, not a hyper-parameter that needs to be optimized. Pinball loss minimization with target quantile $1-\alpha$ explicitly produces a quantile regression model with false positive rate $\alpha$. Moreover, if we want an ensemble of attacks with multiple false positive rates, this doesn't require learning multiple quantile regression models from scratch: it can be done efficiently via multi-task learning with a shared neural network representation (and a separate "head" layer per target quantile).
> *Currently, I have not observed any significant influence of the ongoing work on industry-level applications. Despite this, it is important to note that through additional analysis and exploration, there may be a possibility of uncovering substantial potential for impact in the future. As of now, the ongoing work does not appear to have made any noticeable waves in the industry. Nevertheless, by delving deeper into its intricacies and exploring its possibilities, we might unveil its capacity to bring about significant changes that could shape the industry landscape in the coming years.... -- The authors fail to provide a comprehensive explanation of the significance of membership inference attacks (MIAs). It remains unclear why it is crucial to prevent the disclosure of specific data points that were used in training a model. If we assume that we have already inferred the utilization of certain data during the training process, what kind of significant problems could arise as a result? While consulting the references cited in the submission may shed some light on these matters, the paper itself should strive to be self-contained and inclusive, providing a thorough understanding of the subject matter without relying heavily on external sources*
Thanks for the suggestion -- we are happy to devote more space towards motivating membership inference attacks. There are several different kinds of ''privacy attacks'' on trained models, and membership inference attacks are the simplest. There are broadly two main reasons to be interested in membership inference attacks:
First, membership inference attacks are building blocks that are used to launch stronger kinds of attacks, like data extraction attacks, which extract training data from the models given API access --- see e.g. Extracting Training Data from Diffusion Models, Carlini et al. 2023. Improvements in membership inference lead to improvements across the entire stack of attacks based on membership inference.
Second, the guarantee of differential privacy (a strong notion of privacy that has been adopted by companies including Apple, Google, and Microsoft) is exactly that membership inference attacks can have True Positive/False positive rate curves that lie boundedly above the random guessing baseline (the diagonal on our plots). See e.g. the paper ''[Gaussian Differential Privacy](https://rss.org.uk/RSS/media/Training-and-events/Events/2020/Dong-et-al-jrssb-final.pdf)'' by Dong, Roth, and Su for an overview of the hypothesis testing view of differential privacy. So launching a successful membership inference attack falsifies a differential privacy guarantee, and is thus a form of privacy auditing that is gaining attention in industry --- see e.g. ''[Privacy Auditing with One (1) Training Run](https://arxiv.org/abs/2305.08846)'' by Steinke et al. Improving the scalability of membership inference attacks (as our paper does) makes this form of privacy auditing more tractable.
> *All the current MIAs, including this submission, are based on the observation that models tend to overfit their training sets. Will it still be true for current big models trained on big data? My understanding is that the assumption under MIAs is mainly caused by the generalisation ability of most models.*
Yes: The trend we see in this work is that our attacks work better on larger training sets and larger models. This suggests that these issues may get worse, not better, in larger data and larger model regimes. Other work [2] has shown that MIAs are effective on even larger datasets and architectures than those we used here (i.e. large language models), suggesting that membership inference attacks continue to be problematic in these regimes.
[2] Extracting Training Data from Large Language Models, Carlini et al. 2021
> *Will the conclusion still be true in other domains, e.g., natural language and audio?*
There is no reason a priori our approach would not work on these other domains, though the relevant tasks there are generally not classification. Carlini et al. 2021 and Carlini et al. 2022 both have demonstrated that it was entirely possible to attack large language models, and it is reasonable to assume that our attack would also be effective there.
> *Moreover, the contribution of this work appears to be rather limited in scope. To put it simply, the essence of this study involves training a regression model using only a subset of the available training data. It therefore follows that the significance and potential impact of this work may be somewhat restricted.*
We would like to clarify our regression model is trained on **validation** data, not training data, meaning this attack does not require that the attackers have access to any training data: the attack can be launched by anyone with API access to the model.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for answering my questions. Given that most of them have been addressed properly, I am willing to raise my score, although I still cannot see a significant impact of this work on industry applications.
Rebuttal: We thank the reviewers for their careful reading and useful feedback. In the attached pdf we include an additional experiment over the CINIC10 dataset to address some of the reviewers' specific concerns. We're happy to further discuss any of these points if there are any remaining questions or confusions!
Pdf: /pdf/5d3ed9732275268f38bd73e9d30dfa61b55d5a0e.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors propose a novel class of membership inference attacks based on quantile regression applied to confidence score distributions. The proposed 'black-box' algorithm does not require knowledge of the model's architecture and performs competitively with state-of-the-art shadow model attacks while being computationally more efficient. The paper presents several experiments showcasing the effectiveness of the approach across various datasets.
Strengths: 1. The paper is well-written, with a clear and understandable motivation.
2. The technical aspects are solid and coherent.
Weaknesses: 1. In my opinion, the main contribution of this paper, which involves using the quantile regression model q to predict the quantiles of the score s(x,y), is not a novel idea within quantile regression. [a] has previously utilized pinball loss in conformal prediction for estimating quantiles of output. Combining quantile regression with membership inference might not present a significant technical challenge and may seem trivial.
2. The claim that the method does not require any knowledge of the model's architecture appears overstated. The method's performance heavily depends on the quantile regression model q. If q is not well-trained, the method will fail. The paper only employs the ConvNext-Tiny model in the main paper, and the sensitivity to different model architectures remains unknown.
3. I find the theoretical part confusing. In my understanding, Theorems 1 and 2 do not establish the theoretical superiority of the proposed method compared to the baseline method using constant quantiles. The paper introduces group validity, which has been extensively studied in the papers mentioned in Line 253, resulting in a lack of theoretical novelty. Additionally, even if the quantile regression model q is learned from a richer set of models, there is no concept of a 'regression model q' for the baseline method. Therefore, it is unclear how the authors concluded that the proposed method outperforms the baseline attack. It seems more intuitive to conclude that a richer model complexity for q may benefit the proposed method.
[a] Romano et al., Conformalized Quantile Regression, NeurIPS 2019
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Could you provide a sensitivity study regarding the architecture of q? Is it an important factor? Will the method perform better when q uses the same architecture as f? Will it lead to more accurate quantile estimation?
2. I mentioned my concerns about the theoretical part above.
3. The experimental results do not perform as well on small datasets. Given that the computation of LIRA with n=2 seems acceptable and it achieves much better performance than your method, could you explain the observed gap?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: I did not find the potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your careful reading and useful feedback — below we address your specific questions. We’re happy to further discuss any of these points if there are any remaining questions or confusion!
> *In my opinion, the main contribution of this paper, which involves using the quantile regression model q to predict the quantiles of the score s(x,y), is not a novel idea within quantile regression...*
Quantile regression is a classical statistical method for estimating the quantiles of a score distribution. It has been applied in many domains over the course of decades, including recently within conformal prediction, as noted by the reviewer. We do not claim to be developing novel quantile regression techniques; we use quantile regression as a tool to develop a new, scalable membership inference attack, which gives substantial improvements over the state-of-the-art on large datasets.
> *The claim that the method does not require any knowledge of the model's architecture appears overstated...*
We agree that this point can be elaborated on in our paper to give further substantiation. We note at the outset that the results we report use the same quantile regression architecture to attack vision models of different architectures; in contrast the results we report for shadow models use custom architectures for each model under attack. Here are the primary rationales for our claim.
- From a theoretical perspective, because pinball loss minimization is a non-parametric method for quantile regression, the regression function $q$ need not be from a class containing the target model (in fact, they can be completely unrelated classes). Of course the model class needs to be sufficiently expressive to obtain low loss, but this is qualitatively different than the use of shadow models, which are used to sample from the same distribution on scores as the original model did.
- Our experiments found that large image datasets and a variety of architectures were attackable with convolutional architectures like ConvNext or ResNet; smaller, tabular datasets and models were attackable with gradient-boosting trees.
As an example of architecture robustness, we were able to achieve $76.67\%$ and $85.46\%$ precision at $1\%$ and $0.1\%$ FPR, respectively, attacking a ResNet-50 model with a ResNet-50 quantile regression model on the CINIC10 dataset. This performance is roughly comparable to $4$ shadow models in the same conditions ($78.71\%$ and $86.99\%$ precision, respectively) with perfect knowledge of the target model's training parameters. These results are shown in more detail in the accompanying pdf.
> *I find the theoretical part confusing. In my understanding, Theorems 1 and 2 do not establish the theoretical superiority of the proposed method compared to the baseline method using constant quantiles...*
Although this is not how Yeom et al. describe their method, if one wants to view our attack as a generalization of Yeom et al, one can view their attack as solving a pinball loss minimization problem over the class of constant threshold models. This is because, over the class of constant functions, pinball loss is minimized at the threshold that corresponds to the target quantile of the score distribution. In this framing, our attack is a generalization in that it solves a quantile regression problem over a strictly larger model class. As we optimize over richer model classes, we get strictly stronger guarantees---the most obvious are that we find models that have lower pinball loss. As we highlight in the theory section of the paper, our guarantees are also stronger in the sense that we provide group conditional coverage--that is, the false-positive rate is no more than a target level for a large collection of subgroups. In contrast, Yeom et al. only provide a marginal guarantee that ensures a target false-positive rate over the entire distribution. Of course, the main demonstration of the superiority of our attack is given in the empirical results on large datasets.
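To make the constant-threshold framing above concrete, here is a small numerical sketch (our own illustration with synthetic scores, not the paper's code): minimizing pinball loss over constant thresholds recovers the empirical target quantile of the score distribution.

```python
import numpy as np

def pinball_loss(theta, scores, tau):
    """Average pinball (quantile) loss of a constant threshold theta
    at target quantile level tau."""
    diff = scores - theta
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

rng = np.random.default_rng(0)
scores = rng.normal(size=10_000)  # stand-in for non-member confidence scores
tau = 0.99                        # e.g. targeting a 1% false-positive rate

# Minimizing pinball loss over constant thresholds recovers the empirical
# tau-quantile -- the constant-model special case described above.
grid = np.linspace(-4, 4, 4001)
best = grid[np.argmin([pinball_loss(t, scores, tau) for t in grid])]
print(best, np.quantile(scores, tau))  # the two values nearly coincide
```

Optimizing over a richer class of functions $q(x)$ instead of a constant is then exactly the generalization described in this response.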
> *Could you provide a sensitivity study regarding the architecture of q?...*
Our empirical results do not use the same architectures for the models we attack and our quantile regression functions. We find, empirically, that the ability of the model class to minimize pinball loss is an excellent predictor of how well the attack will work. Just as with other learning tasks, more expressive classes will perform better if there is sufficient training data.
> *The experimental results do not perform as well on small datasets...*
For small datasets (and small models), you are correct that LIRA performs quite well. As we discuss in the paper, we find that in these cases, we can extract a quantile predictor from LIRA's shadow models that has lower pinball loss than the one we train directly, and so the gap appears to be due to the difficulty of directly optimizing for pinball loss in relatively data-poor settings. We note that for relatively small datasets and models, shadow model attacks like LIRA are not extremely expensive to run: the running time benefits of our approach are most apparent for learning problems corresponding to large datasets and models, which fortunately is also the setting in which our attack performs the best. This is further supported by our additional experiments on CINIC10, a generalization of the CIFAR10 dataset, where we achieve $76.67\%$ and $85.46\%$ precision at $1\%$ and $0.1\%$ FPR respectively against a ResNet-50 model with a ResNet-50 quantile regression model. This performance is roughly comparable to $4$ shadow models in the same conditions ($78.71\%$ and $86.99\%$ precision respectively). The computational cost of our attack, including extensive hyper-parameter tuning, is roughly $80\%$ that of the comparable shadow model attack. See the attached pdf for additional details on these results.
---
Rebuttal Comment 1.1:
Title: Thanks for the response, I am keeping my score
Comment: Thank you for your response. However, I'm not inclined to raise my score as my concerns remain unresolved. Although this paper doesn't center on conformal prediction, I've found that the techniques it presents align closely with the conformal prediction research line, both technically and theoretically, yielding limited novelty.
1. While I acknowledge that the paper doesn't introduce novel quantile regression techniques, I still perceive the amalgamation of quantile regression and the membership inference attack in this paper as straightforward. Creating a quantile regression model is fundamental in conformal prediction. Blending it with a new context doesn't inherently yield high novelty; further customization of techniques with the membership inference attack is needed.
2. In my view, group-conditional guarantees [1-3] have already received extensive exploration. Particularly, [1] delves into adversarial settings. The theoretical outcomes lack novelty and do not appear as solid as previous works. The theorem essentially merges the membership inference attack setting with group-conditional guarantees.
3. I understand that the original model and quantile regression model can belong to entirely unrelated classes. My point is that the lack of exploration into the model complexity of the quantile regression model is not justified. As you mentioned in the rebuttal, "large image datasets use ConvNext or ResNet; smaller, tabular datasets and models were attackable with gradient-boosting trees." What underlies this distinction? A sensitivity study is notably absent in both the main paper and rebuttal. The experimental results are heavily contingent on the suitable structure of the quantile regression model, demanding a strong inductive bias for its selection.
4. The concern about low performance on small datasets remains unresolved. LIRA's shadow models exhibit lower pinball loss than the one trained directly. What if a different quantile regression model were employed? This aligns with the issue raised in point 3.
Given the reasons above, I'm inclined to recommend rejection for the current version of the paper.
[1] Practical Adversarial Multivalid Conformal Prediction, NIPS 2022\
[2] Batch Multivalid Conformal Prediction, ICLR 2023\
[3] Online Multivalid Learning: Means, Moments, and Prediction Intervals, ITCS 2022 | Summary: This paper studies the question of membership inference attack (MIA), which can be formalized as a hypothesis testing (HT) problem.
The main contribution of this paper is introducing a new class of MIA.
The authors claim that the proposed method is competitive with SOTA MIA methods while being more computationally efficient.
The authors explain the motivation behind their method and theoretically prove the method's effectiveness (in terms of false positive rate).
This paper also conducts experiments on various datasets to evaluate their method.
Strengths: 1. This paper explains the relationship between MIA and HT, which is friendly to readers unfamiliar with MIA.
2. This paper compares the proposed method with those methods in Yeom et al [2018] and Carlini et al [2022]. The proposed method seems to be a direct generalization of Yeom et al [2018].
Weaknesses: My major concerns lie in the theoretical part of this paper:
1. Theorems 1 and 2 are established at the population level. The approximation error of $q$ is not discussed.
2. This paper does not include any non-asymptotic results.
3. The theoretical part of this paper only compares the proposed method with the baseline attack (Yeom et al [2018]). The difference between the proposed method and Carlini et al [2022] is not discussed.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. It seems unintuitive that the proposed method outperforms Carlini et al [2022]. Could you please explain this phenomenon?
2. Theorem 1 assumes that $\mathcal{H}$ is closed under additive shifts. Are there any examples of such hypothesis classes?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your careful reading and useful feedback — below we address your specific questions. We’re happy to further discuss any of these points if there are any remaining questions or confusion!
> *Theorems 1 and 2 are established at the population level. The approximation error of $q$ is not discussed.*
The reviewer is correct that our theorems are stated at the population level/without approximation error. That said, under the assumption that the learning of $q$ is done from some family of functions with bounded fat-shattering dimension (or other analogous quantity), standard sample complexity and uniform convergence results will imply guarantees in the finite sample setting. The reviewer is correct to note that to get worst-case theoretical bounds from finite data, one would want generalization theorems: but since the learning portion of our attack is simply solving a bounded, Lipschitz, convex optimization problem, generalization issues (and their solutions) are standard and not the focus of our work. We never rely on our theorems when evaluating our method: all results are empirical, and computed on a holdout set. The theorems are meant to give guiding intuition. We are happy to include a discussion of these points and a pointer to textbook generalization theorems for convex optimization.
> *The theoretical part of this paper only compares the proposed method with the baseline attack (Yeom et al [2018]). The difference between the proposed method and Carlini et al [2022] is not discussed.*
The baseline attack and our method both have guarantees that hold in the worst case, without making parametric assumptions. Carlini et al. [2022] do not have comparable theoretical results. Their approach estimates score distributions parametrically, so any comparable guarantees would require making corresponding parametric assumptions about the distribution of scores over the randomness of the dataset and model training. We can add a discussion of this point in the revision.
> *It seems unintuitive that the proposed method outperforms Carlini et al [2022]. Could you please explain this phenomenon?*
There are two primary reasons that our approach outperforms Carlini et al. **in some scenarios**.
The first, as mentioned above, is that their method fits a simple parametric model to the scores sampled from the shadow model, and to the extent that their parametric model is ill-specified, the success of their approach will degrade; in contrast our quantile regression method (when implemented via pinball loss minimization) is non-parametric.
Second, we formulate our hypothesis testing problem differently from Carlini et al., in a way that better captures the attack scenario. The hypothesis testing formulation in Carlini et al. operates on distributions of shadow models that are generated by running the training algorithm on random input datasets that purposefully include or exclude the target attack example.
Thus their hypothesis test is designed to attack a random model sampled from some specified training distribution, rather than the particular (realized) model that the membership inference attack is being launched on. In contrast, in the way we cast the hypothesis testing problem, the model under attack is fixed, and the randomness is entirely over the selection of the point under attack. Thus our hypothesis test is targeted at the specific model under attack, rather than a random model sampled from the same distribution. This better fits the actual threat model. We will add a discussion of this to the paper.
> *Theorem 1 assumes that $\mathcal{H}$ is closed under additive shifts. Are there any examples of such hypothesis classes?*
Yes! Many hypothesis classes are closed under additive shifts. This includes linear and polynomial regression models, regression tree models, and any neural network architecture that has a bias/offset term. More generally, any model architecture can be made to be closed under additive shifts by adding a single additional parameter (a bias/offset term, when one does not already exist). Thus we view the additive shift assumption to be extremely mild, and easily enforceable if necessary.
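As a concrete toy illustration of this closure property (our own hypothetical example, not from the paper): for a linear model with a bias term, shifting the output by a constant $c$ yields another model in the same class, with bias $b + c$.

```python
import numpy as np

# Toy illustration of closure under additive shifts for linear models with a
# bias term (hypothetical example, not from the paper): f(x) + c is again a
# linear model in the same class, with bias b + c.
w, b, c = np.array([1.0, -2.0]), 0.5, 3.0
x = np.array([2.0, 1.0])

f = lambda v: w @ v + b            # original model
g = lambda v: w @ v + (b + c)      # shifted model, still in the class
assert np.isclose(f(x) + c, g(x))  # the additive shift stays inside the class
```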
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their response. The rebuttal and global response have fully addressed our concerns and we have no follow-up questions. We will keep our score. | null | null | null | null |
Tailoring Self-Attention for Graph via Rooted Subtrees | Accept (poster) | Summary: The paper introduces Subtree Attention (STA), a new graph attention mechanism that overcomes limitations of existing mechanisms in graph learning. STA combines fully-attentional structure with rooted subtrees, approximating masked global attention under extreme settings. By computing attention weights among multi-hop neighbors directly, STA addresses previous mechanism problems. The authors present STAGNN, a graph neural network that utilizes STA and a hop-aware attention strategy. Extensive evaluations on ten node classification datasets show that STA-based models outperform existing graph transformers and mainstream GNNs.
Strengths: 1. This paper proposes a novel multi-hop graph attention mechanism, Subtree Attention (STA), which bridges the fully-attentional structure and the rooted subtree.
2. By incorporating kernelized softmax, they develop an optimized approach for STA, resulting in linear time complexity.
3. Experiments have shown that STAGNN achieves good performance on node classification tasks.
Weaknesses: The performance improvement over NAGphormer is limited, as demonstrated in Table 1. For instance, on Photo, NAGphormer achieved a score of 95.49±0.11, while STAGNN achieved 95.64±0.27. The difference between 95.49 and 95.64 is only 0.15, which is smaller than the 0.27 standard deviation. A similar trend is observed for CS. Furthermore, on Physics, STAGNN performs worse than NAGphormer. Additionally, it is worth noting that the variance of STAGNN appears to be much larger than that of NAGphormer.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Why do you only evaluate STA on the node classification task? What about Graph Classification?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful feedback on our manuscript. We provide the following detailed responses to your major concerns.
> Q1. "The performance improvement of NAGphormer is limited, as demonstrated in Table 1. For instance, when evaluating photos, NAGphormer achieved a score of 95.49±0.11, while STAGNN achieved 95.64±0.27. The difference between 95.49 and 95.64 is only 0.15, which is smaller than 0.27. A similar trend is observed in the case of CS. Furthermore, in Physics, STAGNN performs worse than NAGphormer. Additionally, it is worth noting that the variance of STAGNN appears to be much larger than that of NAGphormer."
A1. Thank you for bringing up this point. Firstly, of the ten node classification datasets we tested, our model achieved the highest average scores on nine. This underscores the competitive performance of STAGNN.
| Method | Pubmed | CoraFull | Computer | Photo | CS | Physics |
|------------|------------:|-----------:|-----------:|-----------:|-----------:|-----------:|
| NAGphormer | 89.70±0.19 | 71.51±0.13 | 91.22±0.14 | 95.49±0.11 | 95.75±0.09 | 97.34±0.03 |
| STAGNN | 90.46±0.22 | 72.65±0.36 | 91.72±0.30 | 95.64±0.27 | 95.77±0.16 | 97.09±0.18 |
Additionally, the performance of STAGNN and NAGphormer is tabulated above. From the table, STAGNN and NAGphormer have comparable scores on Photo, CS, and Physics, where both models already achieve very high scores (above 95%). In reality, due to inherent noise in graph datasets, pushing for even higher accuracy in such scenarios can be challenging (To the best of our knowledge, we have not observed higher scores on these three datasets in other literature). A more apt observation might be that both STAGNN and NAGphormer perform exceedingly well on these datasets.
Furthermore, we would like to note that STAGNN is merely a basic application of STA. As pointed out in line 249 in our manuscript, STAGNN employs a rather simple structure. Despite this simplicity, the fact that STAGNN can surpass NAGphormer in performance further showcases the superiority of the STA mechanism.
Lastly, we would like to highlight the theoretical differences between STA and NAGphormer. In brief:
- Hop2Token initially aggregates the node representations. Subsequently, it leverages these representations to derive keys, queries, and values.
- STA, on the contrary, computes the keys, queries, and values for each node initially and subsequently propagates the keys and values using a message-passing mechanism.
The approach employed by Hop2Token, which involves message propagation on node representations, remains susceptible to challenges like over-smoothing and over-squashing. In contrast, STA's methodology of adapting graph attention through the message-passing paradigm remains immune to such issues. This ensures STA with superior theoretical properties (and, correspondingly, empirical results).
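A schematic NumPy sketch of the STA-style ordering described in the second bullet above (our own toy paraphrase, not the authors' implementation): queries, keys, and values are projected once per node, and then the keys and values are propagated with the random-walk matrix, hop by hop.

```python
import numpy as np

# Toy undirected graph with 4 nodes; P is the row-stochastic random-walk
# matrix used for message passing.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))               # node features
Wq, Wk, Wv = rng.normal(size=(3, 3, 3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv          # projected ONCE, before propagation

hop_scores = []
for k in range(1, 4):                     # hops 1..3
    K, V = P @ K, P @ V                   # propagate keys/values, not X
    hop_scores.append((Q * K).sum(axis=1))  # root-vs-k-hop attention logits
```

The point of the contrast is the ordering: Hop2Token propagates node representations first and projects afterwards, whereas here only the already-projected keys and values are passed along the graph.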
We have attached a PDF in the Author Rebuttal which clearly illustrates the distinctions between STAGNN and NAGphormer. We kindly invite you to take a glance at the attached PDF.
> Q2. "Why do you only evaluate STA on the node classification task? What about Graph Classification?"
A2. Thanks for the reviewer's suggestion. STAGNN is mainly designed for node classification tasks, hence we do not evaluate it on graph classification. However, STA, as a promising alternative to self-attention in the graph domain, can naturally be applied to graph classification tasks. We have conducted an additional experiment. We replace the self-attention module in the vanilla transformer with our STA module and apply this new STA+Transformer model for graph classification tasks.
Results on the Long Range Graph Benchmark (LRGB):

| Method | #Params. | Peptides-func (graph classification) AP ↑ | Peptides-struct (graph regression) MAE ↓ |
|--------------|------------:|------------:|-----------:|
| GCN | 508k | 59.30±0.23 | 34.96±0.13 |
| GCNII | 505k | 55.43±0.78 | 34.71±0.10 |
| GINE | 476k | 54.98±0.79 | 35.47±0.45 |
| GatedGCN | 509k | 58.64±0.77 | 34.20±0.13 |
| Transformer+LapPE | 488k | 63.26±1.26 | 25.29±0.16 |
| **STA**+Transformer+LapPE | 488k | **65.83±0.94** | **24.16±0.21** |
We notice that by solely substituting the self-attention module with STA, we achieve noticeable improvements in performance. This demonstrates STA's capability in handling graph-level tasks.
A more detailed analysis of this experiment can be found at the third Q&A in the Author Rebuttal. | Summary: This paper introduces a novel multi-hop graph attention mechanism called SubTree Attention (STA) to address the limitations of both local and global attention in Graph Neural Networks (GNNs). STA allows the root node to attend directly to further neighbors in the subtree, enabling it to gather information from the entire rooted subtree within one layer. This mechanism avoids issues associated with deep architectures using local attention layers and captures the hierarchical neighborhood structure more effectively than global attention. The authors propose an efficient algorithm based on kernelized softmax to calculate attention among multi-hop neighbors. They also provide a theoretical analysis of STA, demonstrating its convergence to a degree-masked global self-attention. The paper introduces STAGNN, a multi-hop graph attention network that integrates the STA module into decoupled GCNs, achieving superior performance compared to existing GNNs and graph transformers. Experimental evaluations on node classification datasets confirm the effectiveness of STA and the competitive performance of STAGNN.
Strengths: * The authors propose a strategy to incorporate attention in multi-hop propagation GNNs and get good performances.
* Good theoretical analysis is provided.
Weaknesses: * Lack of baselines (BernNet,...). The authors ignore some important baselines of propagation-based GNNs, including BernNet and successive work.
* The kernelized attention is a method from Transformer variants. Need citation.
* Datasets are not large, should include OGB-datasets, at least ogbn-arxiv
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: * How is Graph Transformer applied to node-classification? Is it the pipeline described? Because graph for node-classification is larger than graphs for graph-level tasks? Maybe subgraph sampling is needed?
* Two questions about novelty. Is STAGNN an attention-based method to combine multi-hop information of graphs? Is the difference between using a vanilla transformer and STA that STA has kernelized attention? If not, can you briefly describe the novelty here?
* Is the multi-head attention applied in STAGNN?
* If the attention will converge to degree-based global attention, why will STA work? Is the convergence achieved in training?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for the reviewer's thoughtful feedback. We provide the following detailed responses to your major concerns.
> Q1. "Lack of baselines (BernNet,..) The authors ignore some important baselines of propagation-based GNNs, including BernNet and successive work."
A1. Indeed, BernNet is a compelling work in the realm of spectral GNNs. We conduct a supplementary experiment where we take BernNet as an additional baseline:
| Method | Pubmed | CoraFull | Computer | Photo | CS | Physics |
|---|---:|---:|---:|---:|---:|---:|
| BernNet | 89.12±0.33 | 67.88±0.29 | 88.8±0.34 | 94.21±0.16 | 94.64±0.10 | 96.23±0.11 |
| STAGNN | 90.46±0.22 | 72.65±0.36 | 91.72±0.30 | 95.64±0.27 | 95.77±0.16 | 97.09±0.18 |
We can observe that STAGNN consistently outperforms BernNet. This underscores the exceptional performance of STA.
> Q2. "The kernelized attention is a method from Transformer variants. Need citation."
A2. Indeed, we have cited several papers on kernelized methods (lines 104, 106, and 149). We will take the reviewer's suggestion and offer a clearer introduction to kernelized methods.
Additionally, we kindly invite the reviewer to take a glance at **the first Q&A in the Author Rebuttal** for a more detailed explanation of the role kernelized method plays in STA.
> Q3. "Datasets are not large, should include OGB-datasets, at least ogbn-arxiv"
A3. We appreciate the suggestion from the reviewer. As recommended, we conduct an additional experiment on the ogbn-arxiv dataset. We treat ogbn-arxiv as an undirected graph. The results of the experiment are as follows:
Method | ogbn-arxiv
-|-
MLP | 55.50±0.23
GCN | 71.74±0.29
JKNet | 72.19±0.21
UniMP | 73.11±0.20
STAGNN | 75.42±0.35
We can observe that even on a larger-scale dataset, STAGNN still maintains highly competitive performance.
> Q4. "How is Graph Transformer applied to node classification? Is it the pipeline described? Because graph for node-classification is larger than graphs for graph-level tasks? Maybe subgraph sampling is needed?"
A4. Indeed, datasets used for node classification are often larger than those for graph classification. There exist methods using graph transformers in conjunction with subgraph sampling for node classification tasks [1, 2]. In our experiments, however, we aim for a fair comparison between STA and vanilla self-attention in the context of topological data structures. Therefore, our Graph Transformer baselines do not employ these sampling techniques.
While different graph transformers in our baselines have their unique designs, such as learnable positional encodings [3] and GNNs as auxiliary [4], these models employ a straightforward pipeline where each node is treated as a distinct token. They are then fed into the transformer model, yielding a final representation for each node.
> Q5. "Two questions about novelty. The STAGNN is an attention-based method to combine multi-hop information of graphs? The difference of using vanilla transformer and STA is that STA has kernalized attention? If not, can you briefly describe the novelty here?"
A5. This is a valuable question. We would like to highlight that STA is very different from other graph attention approaches, and **this differentiation goes beyond its use of kernelized attention**. Given that this is a critical issue raised by multiple reviewers, we have provided a detailed response in the Author Rebuttal. We kindly invite the reviewer to refer to **the first Q&A in the Author Rebuttal** for a thorough explanation.
Furthermore, we kindly invite you to review the PDF attached to the Author Rebuttal above. In that pdf, we visually illustrate the novel parts of STA.
> Q6. "Is the multi-head attention applied in STAGNN?"
A6. Yes, in STAGNN, we employ a carefully crafted multi-head attention mechanism (section 3.3). We have also conducted an ablation study on multi-head attention within STAGNN in Appendix G.1 and demonstrated its efficacy.
> Q7. "If the attention will converge to degree-based global attention, why will STA work?"
A7. We fully understand the reviewer's concern. We would like to clarify that **it is not** STA itself that converges to degree-based global attention; rather, $\text{STA}_k$ converges to degree-based global attention as $k$ increases (section 3.4). STA contains $\text{STA}_0$, $\text{STA}_1$, $\text{STA}_2$, ..., and can employ various hop-aggregation strategies to adaptively adjust the weights of different hops (e.g. the GPR-like aggregation used in STAGNN).
We highlight that $\text{STA}_1$ functions as local attention, while $\text{STA}_k$ converges to degree-masked global attention. Thus **STA theoretically includes every intermediate step between local and global attention** (line 207).
> Q8. "Is the convergence achieved in training?"
A8. In fact, this convergence is intrinsic and remains unaffected by the training process. Consider both $\text{STA}_k (Q,K,V)$ and $\text{SA} (Q,K,V)$ as functions of $Q, K, V$. As illustrated in section 3.4, $\text{STA}_k (Q,K,V)$ converges to $\text{SA} (Q,K,V)$ as $k$ increases. This convergence is solely determined by $k$ and is unaffected by the values of $Q, K, V$. Conversely, the training process only modifies $Q, K, V$.
Theoretically, this convergence stems from the properties of random walks on graphs. If a random walk runs for infinitely many steps, the probability of landing on a specific node becomes proportional to that node's degree (on a connected, non-bipartite graph). This explains why $\text{STA}_k$ converges to degree-based global attention.
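The random-walk intuition above can be checked numerically with a toy sketch (our own illustration; the lazy walk $(I + P)/2$ is used here only to sidestep periodicity on this small bipartite path graph):

```python
import numpy as np

# Path graph on 4 nodes; P = D^{-1} A is the row-stochastic random-walk
# matrix. Its powers converge to a stationary distribution proportional
# to node degree, which is the source of the degree-based masking.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
P = A / deg[:, None]

# The lazy walk (I + P)/2 has the same stationary distribution and avoids
# parity oscillation on this bipartite toy graph.
P_lazy = (np.eye(4) + P) / 2
Pk = np.linalg.matrix_power(P_lazy, 200)

stationary = deg / deg.sum()
print(Pk[0])        # every row approaches `stationary`
print(stationary)   # [1/6, 1/3, 1/3, 1/6] -- proportional to degree
```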
[1]. Q. Dai, et al. A Self-Attention Network based Node Embedding Model. ECML 2020.
[2]. Z. Zhang, et al. Hierarchical Graph Transformer with Adaptive Node Sampling. NeurIPS 2022.
[3]. D. Kreuzer, et al. Rethinking Graph Transformers with Spectral Attention. NeurIPS 2021.
[4]. L. Rampášek, et al. Recipe for a General, Powerful, Scalable Graph Transformer. NeurIPS 2022.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response. Most of my concerns are addressed. The authors need to fix their final version carefully. I will change my score to weak accept.
---
Reply to Comment 1.1.1:
Title: Many thanks
Comment: Many thanks for your positive remarks. We are committed to further refining our work and making the necessary improvements to address any concerns. Thank you for the opportunity to enhance the quality of our research. | Summary: The present manuscript proposes a novel graph attention layer, that lies in the middle of the local aggregation scheme of message passing and the global (non-structured) nature of a full-attention. The newly proposed Subtree Attention constructs for each node the similarity pairs of key and query matrices from its k-hop neighborhood.
Given the transformed matrices, they perform an efficient aggregation over the neighborhood hops by using the random-walk adjacency matrix as the transition matrix in the graph attention. They show that combining the random-walk matrix with a kernelized softmax function reduces the computational cost to $\mathcal{O}(\mathcal{E})$, where $\mathcal{E}$ is the number of graph edges.
The authors prove theoretically that given a set of assumptions, the Subtree attention can approximate the global attention, showing that the proposed layer is able to tackle over-smoothing and over-squashing problems that come with MPNNs, while bridging the local with the global attention.
Their experimental study shows that STAGNN (their method) has a superior performance with respect to the state-of-the-art graph transformer models.
Strengths: - The Subtree attention is a novel and interesting idea for bridging local neighborhood attention with global graph attention.
- The theoretical study shows that subtree attention can approximate global attention, which can be very useful for treating over-smoothing in MPNNs.
- Empirically, the model STAGNN seems to outperform its competitor baselines.
- The paper is well-written and easy to follow.
Weaknesses: - The experimental study is limited to six benchmark datasets, for which it is unclear whether long-range interactions play any role or whether shallower architectures are also effective.
- It would be very useful to also test STAGNN on other datasets (e.g., from OGB) that reportedly depend on long-range interactions, or on synthetic datasets, so that the impact of the proposed model can be shown more clearly.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Can the authors provide any empirical evidence that the examined datasets call for deeper models? In other words, do these datasets contain any long-range interactions, or only shallow interactions?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: In the supplementary material, the authors discuss possible extensions and ways to improve STAGNN by incorporating it into more models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's constructive feedback and positive remarks on our work. We provide the following detailed responses to your major concerns.
In order to organize our response logically, we jointly address the first weakness and the question raised by the reviewer.
>Q1. "The experimental study is limited to six benchmark datasets, which it is unclear whether long-range interactions play any role, or instead shallower architectures are, also, effective."
> "Can the authors show any empirical evidence that the examined datasets can show a need for deeper models? In other words, do these datasets consist of any long-range interactions, or do they consist only of shallow interactions?"
A1. In fact, we have conducted evaluations on 10 datasets in our paper. Kindly refer to Figure 3 (4 datasets) and Table 1 (6 datasets). These datasets come from real-world scenarios. Given the intricate relationships in real-world networks, it appears challenging to definitively state whether they rely more on long-range or short-range relationships or a blend of both. However, we would like to note that these datasets are frequently used as standards in assessing the capabilities of GNNs. In fact, numerous graph neural network methodologies [1,2,3] have utilized these datasets (or part of them) as performance benchmarks. Testing on these datasets offers a fair representation of our model's performance.
At the same time, we do have a piece of indirect evidence that reflects the level of involvement of the long-range interaction: **the GPR weights** within STAGNN. GPR weights are the weights learned for each hop when STAGNN adaptively aggregates the information across different hops. They can be considered as a series of 'importance scores' that the model deduces during training for different hops.
Let us give an example. The GPR weights for the Cora and Actor datasets are visualized in Figure 7 (Appendix section F). For instance, in the Cora dataset, we observe an increasing trend in the GPR weights over hops ranging from 0 to 10. This suggests that the model values relationships that span longer distances than relationships of just one or two hops. In contrast, for the Actor dataset, there's a noticeable decline in GPR weights over hops ranging from 0 to 10, highlighting the model's preference for shorter relationships. In summary, the GPR weights adaptively learned by STAGNN during the training process could serve as indirect evidence of which order of neighbors (longer or shorter relationships) the model primarily focuses on. We believe this also showcases the strong interpretability of STA.
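As an illustration of this mechanism (a sketch in the spirit of GPR-GNN [1], not the authors' implementation; the adjacency, features, and weights below are randomly generated placeholders), the per-hop weight $\gamma_k$ scales the $k$-hop propagated features before they are summed, so increasing weights over hops indicate reliance on longer-range relationships:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, K = 5, 3, 4                       # nodes, feature dim, number of hops (hypothetical sizes)
A = (rng.random((N, N)) < 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T          # random symmetric adjacency, no self-loops
deg = np.maximum(A.sum(axis=1), 1.0)
P = A / deg[:, None]                    # random-walk normalized adjacency

X = rng.standard_normal((N, d))         # node features
gamma = np.array([0.1, 0.2, 0.3, 0.4])  # per-hop 'importance scores' (learned in practice)

H, out = X, np.zeros((N, d))
for k in range(K):
    out += gamma[k] * H                 # weight the k-hop representation
    H = P @ H                           # propagate one more hop
print(out.shape)  # (5, 3)
```

Inspecting the learned `gamma` after training (as in Figure 7) then reveals whether the model favors near or distant hops.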
We appreciate the valuable question from the reviewer. While these datasets might not clearly show STAGNN's ability to handle long-range interactions, we've added an additional experiment using the Long Range Graph Benchmark dataset (see our answer to question 2).
Finally, we would like to note that STA is not designed specifically for handling long-range interactions; it is designed to be an all-around player. We theoretically prove that STA includes both local and (approximately) global attention, as well as every intermediate step between them (line 207), and thus STA handles both long- and short-range interactions.
>Q2. It would be very useful testing STAGNN, also, in other datasets (e.g. from OGB), that reportedly depend on long-range interactions, or in synthetic datasets, so that the impact of the proposed model can be shown better.
A2. We greatly appreciate the suggestion provided by the reviewer. Based on this recommendation, we opt for two graph datasets from the **Long Range Graph Benchmark (LRGB)** to conduct an experiment [4]. LRGB includes 5 graph learning datasets that require long-range interactions to achieve strong performance in a given task.
We aim to demonstrate that STA (rather than STAGNN) possesses the capability to capture long-range interactions. Thus, we replaced the self-attention module in the vanilla transformer with our STA module and employed this new STA+Transformer model for long-range interaction learning tasks.
We elaborate on this experiment in detail in the Author Rebuttal. We kindly invite you to take a look at the **third Q&A in the Author Rebuttal**. Below are the experimental results:
|Long Range Graph Benchmark (LRGB) | | Peptides-func (graph classification)| Peptides-struct (graph regression) |
|--------------|------------:|------------:|-----------:|
| | #Params. | AP ↑ | MAE ↓ |
| GCN|508k| 59.30±0.23| 34.96±0.13 |
| GCNII|505k| 55.43±0.78 | 34.71±0.10 |
| GINE |476k| 54.98±0.79| 35.47±0.45 |
| GatedGCN |509k| 58.64±0.77|34.20±0.13 |
| Transformer+LapPE |488k| 63.26±1.26| 25.29±0.16 |
| **STA**+Transformer+LapPE |488k|**65.83**±**0.94**| **24.16**±**0.21**|
By solely substituting the self-attention module with STA, we achieved noticeable improvements in performance. This highlights STA's ability to handle long-range interactions effectively.
[1]. E. Chien, et al. Adaptive Universal Generalized PageRank Graph Neural Network. ICLR 2021.
[2]. Q. Wu, et al. NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification. NeurIPS 2022.
[3]. J. Chen, et al. NAGphormer: A Tokenized Graph Transformer for Node Classification in Large Graphs. ICLR 2023.
[4]. V. P. Dwivedi, et al. Long Range Graph Benchmark. NeurIPS 2022 Track on Datasets and Benchmarks. | Summary: In this paper, the authors propose a novel Graph Transformer called STA-GNN (SubTree-attention-GNN) to address the over-smoothing and over-squashing problems in the message-passing scheme. Different from previous Graph Transformers, STA-GNN's attention mechanism is called SubTree Attention (STA), which computes a node's attention according to different levels of its subtree. To overcome the exponentially growing computation cost STA brings, the authors leverage the linearized attention technique introduced by [19]. Experiments show that STAGNN outperforms SOTA GNNs on node classification tasks.
Strengths: 1. The authors provide a thorough demonstration of STA's workflow with a theoretical analysis of how STA addresses over-smoothing and over-squashing. Besides, the authors propose an effective computing algorithm, which decreases STA's time complexity.
2. The experiment result outperforms SOTA GCN and Graph Transformer.
3. Ablation studies show that STA is beneficial to global attention in graph learning.
Weaknesses: 1. STA's timing performance is unclear. The authors leverage kernelized softmax to design an efficient algorithm with time complexity $\mathcal{O}(\mathcal{E})$, where $\mathcal{E}$ is the number of edges. However, it is hard to compare STA with graph transformers ($\mathcal{O}(N)$, where $N$ is the number of nodes) purely from the time-complexity aspect, since the numbers of nodes and edges are independent in an undirected graph. This paper lacks experiments on STA's and graph transformers' timing performance. Without such evidence, readers may worry whether STA can achieve superior performance within a reasonable time limit.
2. Analysis of STA's space complexity is missing. STA efficient algorithm reduces the number of calculations by storing and re-using previous calculation results for future calculation. But there is no discussion on the potential problem of STA's memory cost. A memory cost comparison can better show the efficiency of STA.
3. Hop2Token, used in NAGphormer [1], also learns the graph at the level of hops, like STA. The experimental results against NAGphormer do not seem very superior, especially since the experiments are not comprehensive enough to include a comparison of time and memory cost.
4. Equation (5) doesn't match the context of Figure 1 (a). $N$ is defined as the number of nodes at the start of Section 2. However, Figure 1 (a) says that $\text{STA}_k$ should only attend to the $k$-th hop neighbors, which conflicts with Equation (5).
5. There is a lack of explanation when introducing the rewriting from Equation (3) to Equation (4) (in Sec. 2.2). It is not obvious how to rewrite a similarity function with a selected kernel and feature map. [2] shows how to rewrite the similarity function using a linear kernel and should be cited here.
[1] J. Chen, K. Gao, G. Li, and K. He. Nagphormer: Neighborhood aggregation graph transformer for node classification in large graphs. CoRR, abs/2206.04910, 2022.
[2] A. Katharopoulos, A. Vyas, N. Pappas, and F. Fleuret. Transformers are RNNs: Fast autoregressive transformers with linear attention. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, PMLR 119, pages 5156–5165, 2020.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. In Section 5.2, the authors claim "In contrast to MP-GNNs, STAGNN maintains robust performance even when the height of the subtree reaches 100". What is the source for the claim that MP-GNNs' performance does not remain robust when the height of the subtree reaches 100?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: No limitations are mentioned in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed comments and valuable questions. We provide details to clarify your major concerns.
> Q1 & Q2. (Concerns related to time and space complexity)
A1 & A2.
Firstly, we would like to highlight one point: while we indeed introduce an efficient algorithm, the primary motivation of this algorithm is to **address the implementation challenges associated with STA** (line 136). The goal **is not** to make STA faster or lighter compared to other graph attention mechanisms.
STA is specifically tailored for graph data with topological structures, which results in a more intricate computation process. We craft the algorithm to ensure the feasibility of STA, not with the primary intent of outperforming other methods in computational efficiency. Thus, we do not compare time and space complexity with other graph learning methods in our paper.
We appreciate the reviewer for pointing out the importance of this aspect. Heeding the advice of the reviewer, we conduct an additional experiment focusing on time and space complexity:
| | Cora| | Actor| | Deezer| |
|--------------|:------------|:-------|:-----------|:-------|:-----------|:-------|
| | Memory(MB) | Time(s/epoch)| Memory(MB) | Time(s/epoch)|Memory(MB) | Time(s/epoch)|
|GCN|1,058| 0.007| 1,136 | 0.018| 4,012| 0.103|
|GraphGPS|3,280| 0.043| 4,583 | 0.075|OOM| OOM|
|STAGNN|1,263| 0.008| 1,341 | 0.025| 4,896| 0.136|
It's worth noting that the training time and memory required by STAGNN are similar to those of GCN. **This is not a coincidence.** It follows from our proposed efficient algorithm for STA (Section 3.2), in which we leverage **the message-passing scheme** to compute graph attention. This ensures that **STAGNN has time and space complexity similar to vanilla message-passing GNNs (in particular, GCN)**, which is completely acceptable.
Additionally, we'd like to note that there is potential for making STA **even faster**. Since STA employs a Message-passing scheme to compute attention, it can be combined with existing GNN acceleration techniques (e.g. [1]) to architect an even quicker graph attention methodology. We are willing to delve deeper into it in future work.
> Q3. "Hop2Token used in NAGphormer also learns the graph at the level of hops like STA. The experiment results against with NAGphormer does not seem very superior, especially when the experiment is not comprehensive to include a comparison of time and memory cost."
A3. Since this is a common question raised by multiple reviewers, we provide a unified reply in the Author Rebuttal at the top of this page. We kindly invite you to take a glance at **the second Q&A of the Author Rebuttal**.
Furthermore, we kindly invite you to take a look at the pdf attached in the Author Rebuttal. This pdf clearly illustrates the superior theoretical properties of STA compared to NAGphormer.
> Q4. "Equation (5) doesn't match the context in Figure 1 (a). N is defined as the number of nodes at the start of section 2. However, Figure 1 (a) says that $\text{STA}_k$ should only attend to the $k$-th hop neighbors, which conflicts with Equation (5)."
A4. Equation (5) **does match** Figure 1 (a). In fact, Equation (5) can be found at the top-right corner of Figure 1 (a).
We understand the reviewer's concern regarding the definition of "N". Please notice that Equation (5) incorporates $A^k$, which implies that $\text{STA}_k$ only attends to the $k$-th hop neighbors.
We hope that this explanation has addressed the reviewer's concern.
> Q5. "Lack of explanation when introducing the rewriting from equation (3) to equation (4) (in section 2.2). It is not obvious how to rewrite a similarity function with a selected kernel and feature map. [2] is about how to rewrite the similarity function using a linear kernel, which should be cited here."
A5. In fact, [2] has been cited in our paper (line 149). Additionally, we have cited multiple kernelized-related works (line 104,106). We will take the reviewer's suggestion and cite these papers at the beginning of our introduction to kernel methods.
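For intuition, here is a minimal numeric sketch of the kernel rewrite in the style of [2] (illustrative only; $\phi(x)=\mathrm{elu}(x)+1$ is one common feature-map choice, not necessarily the exact one in our Equations (3)-(4)). It checks that the quadratic-cost and linear-cost forms of kernelized attention give identical outputs:

```python
import numpy as np

def phi(x):
    # Positive feature map phi(x) = elu(x) + 1 (a common illustrative choice).
    return np.where(x > 0, x + 1.0, np.exp(x))

rng = np.random.default_rng(1)
n, d = 6, 4
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))

# Quadratic form: normalize kernel scores row-wise, then mix values. O(n^2 d).
S = phi(Q) @ phi(K).T                      # n x n similarity scores
quad = (S / S.sum(axis=1, keepdims=True)) @ V

# Linear form: precompute phi(K)^T V and phi(K)^T 1 once. O(n d^2).
KV = phi(K).T @ V                          # d x d
Z = phi(K).T @ np.ones(n)                  # d
lin = (phi(Q) @ KV) / (phi(Q) @ Z)[:, None]

print(np.allclose(quad, lin))  # True
```

The same factorization is what lets attention be propagated with message passing: the summaries `KV` and `Z` can be aggregated along edges rather than over all node pairs.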
> Q6. "In section 5.2, the authors claim 'In contrast to MP-GNNs, STAGNN maintains robust performance even when the height of the subtree reaches 100'. Where is the source of 'the performance of MP-GNNs doesn't maintain robust when the height of the subtree reaches 100'?"
A6. Our claim about the depth limitation of MP-GNNs stems from the widely accepted consensus within the graph neural network community. In [3], it's mentioned that "GNNs suffer a model depth limitation—they tend to perform increasingly worse on classifying graph nodes as the model gets deeper." Another work [4] highlights that "GNNs are susceptible to a bottleneck when aggregating messages across a long path." (Here, "long path" is synonymous with deep subtree) From a more theoretical perspective, the study [5] mentions in Theorem 2 that there's "an exponential information loss of GCNs in terms of the layer size."
In fact, there's no need for such a large number as 100. 7 is enough. [6] has empirically observed that GCN, when evaluated on the Cora dataset, exhibits an accuracy of approximately 37% upon reaching a subtree depth of 7. In contrast, our STAGNN showcases a performance of 89.2% at the same subtree depth.
[1]. H. Zeng, et al. Graphsaint: Graph sampling based inductive learning method. ICLR 2020.
[2]. A. Katharopoulos, et al. Transformers are rnns: Fast autoregressive transformers with linear attention. PMLR, 2020.
[3]. K. Zhou, et al. Understanding and resolving performance degradation in deep graph convolutional networks. CIKM 2021.
[4]. U. Alon, et al. On the Bottleneck of Graph Neural Networks and its Practical Implications. ICLR 2021.
[5]. K. Oono, et al. Graph Neural Networks Exponentially Lose Expressive Power for Node Classification. ICLR 2020.
[6]. M. Liu, et al. Towards deeper graph neural networks. KDD 2020.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the response to my concerns. My major concerns are well addressed. I'm willing to increase my score to 5.
---
Reply to Comment 1.1.1:
Title: Thank you for your positive feedback
Comment: We appreciate the reviewer's positive feedback. We remain dedicated to ongoing improvement. And we thank the reviewer for helping us improve the quality of our paper. | Rebuttal 1:
Rebuttal: We extend our sincere gratitude to the reviewers for their invaluable feedback on our work.
We are delighted to see comments such as "**The Subtree attention is a novel and interesting idea**" (Reviewer smZ8), "**Good theoretical analysis is provided.**" (Reviewer K8yB), and "**can be very useful for treating over-smoothing in MPNNs**" (Reviewer smZ8).
We have carefully considered each comment. Below, we address three common questions raised by some reviewers:
> Q1. How does STA differ from the vanilla graph transformer and NAGphormer? Where is its novelty?
A1. The novelty of STA proposed in this paper can be summarized in two main points.
Firstly, STA presents a novel method for graph attention calculation. We demonstrate that **STA theoretically includes both local and global attention, as well as every intermediate step between local and global attention** (line 207). $\text{STA}_1$ functions as local attention, while $\text{STA}_k$ converges to degree-based global attention as $ k $ increases (section 3.4). And STA contains $\text{STA}_0$, $\text{STA}_1$ ... $\text{STA}_k$ and thus can better capture the hierarchical neighborhood structure than existing graph attention methods.
We provide an intuitive comparison of STA, vanilla graph transformer, and NAGphormer in the attached PDF (You can view it at the bottom of this Author Rebuttal). **We kindly invite you to take a minute to review the attached PDF**.
Secondly, while the idea behind STA is straightforward, its computation is more intricate compared to other vanilla graph attention. We innovatively use kernelized methods to address this issue. To be clear, we **are not** the pioneers in applying kernelized methods in the graph attention domain, but **we are the first to fuse message-passing frameworks with graph attention computations using kernelized methods**. In simpler terms, we initially compute the key, value, and query for each node, followed by the propagation of the key and value across the graph. This idea of computing graph attention is novel and, in our belief, holds vast potential for extensions, such as integration with edge sampling methods.
> Q2. STAGNN doesn't seem to have achieved significant improvements over NAGphormer?
A2. Firstly, of the ten node classification datasets we tested, our model achieved the highest average scores on nine. This underscores the competitive performance of STAGNN.
| Method | Pubmed | CoraFull | Computer | Photo | CS | Physics |
|------------|------------:|-----------:|-----------:|-----------:|-----------:|-----------:|
| NAGphormer | 89.70±0.19 | 71.51±0.13 | 91.22±0.14 | 95.49±0.11 | 95.75±0.09 | 97.34±0.03 |
| STAGNN | 90.46±0.22 | 72.65±0.36 | 91.72±0.30 | 95.64±0.27 | 95.77±0.16 | 97.09±0.18 |
Additionally, the performance of STAGNN and NAGphormer is tabulated above. From the table, STAGNN and NAGphormer have comparable scores on Photo, CS, and Physics, where both models already achieve very high scores (above 95%). In reality, due to inherent noise in graph datasets, pushing for even higher accuracy in such scenarios can be challenging (To the best of our knowledge, we have not observed higher scores on these three datasets in other literature). A more apt observation might be that both STAGNN and NAGphormer perform exceedingly well on these datasets.
Lastly, we would like to note that STAGNN is merely a basic application of STA. As pointed out in line 249 in our manuscript, STAGNN employs a rather simple structure. Despite this simplicity, the fact that STAGNN can surpass NAGphormer in performance further showcases the superiority of the STA mechanism.
> Q3. The main contribution of this paper is to accelerate Graph Attention with kernelized methods?
A3. The answer is **NO**. We emphasize that our primary contribution is **SubTree Attention**. The motivation behind STA is its superior capability and robust theoretical properties. Beyond this main objective, we employ a kernelized approach to devise a linear complexity algorithm for computing STA and introduce STAGNN. Compared to STA, both the efficient computation of STA and STAGNN are secondary contributions.
STA can do more. STA can be employed to enhance existing attention-based graph methods. This potential is also echoed by "Ablation studies show that STA is beneficial to global attention in graph learning." (Reviewer sk7A).
To better showcase the potential of STA, we conduct an additional experiment. We replace the self-attention module in the vanilla transformer with our STA module and apply this new **STA+Transformer** model for graph classification tasks. The experimental results are as follows:
|Long Range Graph Benchmark (LRGB) | | Peptides-func (graph classification)| Peptides-struct (graph regression) |
|--------------|------------:|------------:|-----------:|
| | #Params. | AP ↑ | MAE ↓ |
| GCN|508k| 59.30±0.23| 34.96±0.13 |
| GCNII|505k| 55.43±0.78 | 34.71±0.10 |
| GINE |476k| 54.98±0.79| 35.47±0.45 |
| GatedGCN |509k| 58.64±0.77|34.20±0.13 |
| Transformer+LapPE |488k| 63.26±1.26| 25.29±0.16 |
| **STA**+Transformer+LapPE |488k|**65.83**±**0.94**| **24.16**±**0.21**|
By solely replacing the self-attention module with STA, we achieve noticeable improvements in performance. This experiment shows us the capabilities of STA in three different dimensions:
- STA can capture **long-range relationships** in these Long Range Graph Benchmarks. (Reviewer smZ8)
- STA can be applied to **graph-level** tasks and achieve competitive performance. (Reviewer ds4S)
- STA, as **a strong alternative to self-attention in the graph domain**, has the potential to replace and enhance existing attention-based graph learning methods.
---
We conclude by kindly inviting you to take a glance at this attached pdf. It intuitively illustrates how STA differs from vanilla graph attention methods.
Pdf: /pdf/0f2aef1c8dab7073bda1b3bdcd896d5eac7706e1.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Object-Centric Learning for Real-World Videos by Predicting Temporal Feature Similarities | Accept (poster) | Summary: The authors combine several recent approaches for object-centric learning. In particular, they use the overall framework which is a combination of SAVi with a slot mixer decoder from [13]. The main objective is optical flow prediction, like in SAVi, but they combine it with DINO feature reconstruction from Dinosaur [9]. They also adopt a pre-trained and frozen DINO backbone from Dinosaur. The only novel component in this framework is that they estimate the ground truth optical flow using the model itself by computing similarity between encoder features in consecutive frames. The way this is done is also adopted from prior work (CRW [41]). Another delta with respect to SAVi is that they predict the future optical flow t -> t + 1 at time t, in contrast to the standard flow estimation objective (t - 1 -> t), but the ablations don't show whether this makes a difference.
In the experimental evaluation on MOVi-C and MOVi-E they outperform some baselines but do not compare to the sota methods [9, 36]. They additionally report results on the real world YouTube-VIS benchmark where they outperform the same baselines, but the results are low for all the methods. Ablation demonstrates that on the real videos most of the performance is achieved by the Dino feature reconstruction objective from [9] and the flow prediction objective is fairly ineffective in the real world.
Strengths: The proposed approach seems sound.
A relatively thorough ablation study is provided.
The paper is relatively well written in terms of the language (but not the notation or organization).
State-of-the-art results are reported on the MOVi-E benchmark.
Weaknesses: The proposed approach is merely a combination of several recent techniques for object-centric learning. This wouldn't necessarily be a problem in itself, but the authors are not being fully upfront about it. In particular, they do not explain that their 'feature similarity loss' is equivalent to optical flow prediction from SAVi (though they do show in Figure 3 that the learned feature similarities are effectively optical flow). In any case, the novelty of the proposed framework is minimal.
The authors do not compare to the state-of-the-art video-based and image-based approaches ([36] and [9]) in the main paper. They do compare to [9] on MOVi in the supplementary, but not on YouTube-VIS where that method is expected to be most effective (see ablation in Table 3). According to results on MOVi-C from the supplementary of [36] it outperforms the proposed approach. Thus it only seems to show top performance on MOVi-E.
The results on YouTube-VIS are low for all the reported approaches. The numbers seem to indicate that the proposed methods simply fails less catastrophically than the baselines in the real world.
Although a fairly thorough ablation study is provided it is not consistent enough to understand the main reason why this method outperforms prior work on MOVi-E. For example, the effect of replacing the decoder in [9] is not evaluated. At least the full version of the proposed approach with the decoder from [9] should be reported. Another important baseline is using actual optical flow (ground truth or estimated with a pre-trained model) in place of P_{t, t+1}.
The authors claim to show generalization of the proposed approach by evaluating a model trained on YouTube-VIS on DAVIS, but these experiments are not indicative of anything. It is well known in the video segmentation community that the data distributions of these two datasets are very close and models trained on YouTube-VOS/VIS perform very well on DAVIS without fine-tuning (fine-tuning brings only marginal improvements). It would be better to replace these results with a proper comparison to the state-of-the-art and a more thorough ablation analysis.
There are still quite a few issues with the presentation. In particular, the notation is not consistent: for example y_t denotes the similarity matrix in Eq. 5 but previously it is stated that y_t is the output of the model which is used both for reconstruction and for flow prediction. Later a distinction is made between y_sim and y_rec but until that point it's hard to understand what the authors are talking about. In addition, Figure 2 is never mentioned in the text and Figure 1 is only mentioned on page 5. Captions are also not self-sufficient to parse these figures.
In addition, some of the claims made in the paper are inaccurate. In particular, the authors claim that 'the approaches that are based on optical-flow/motion masks are not self-supervised because they may be hampered by the availability of labels'. This is not true, because self-supervised optical flow estimation and motion segmentation methods exist.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Report results of [9] on YouTube-VIS.
Report a full ablation on MOVi-E starting from Dinosaur and arriving to the proposed approach (ablate both the decoder, the new objective and all the changed hyper-parameters).
Report results for the variant of the mode that uses gt/estimated optical flow directly in the objective.
Ablate whether predicting flow into the future makes a difference compared to the variant of the objective used in SAVi.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Limitations are addressed in the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive suggestions to improve our paper.
> The proposed approach is merely a combination of several recent techniques for object centric learning. […] The novelty of the proposed framework is minimal
All in all, we find this statement not a fair characterization of our contributions. We summarize them in the general response as supported by the other reviewers. In the following discussion the points will be made more specific.
> 'Feature similarity loss' is equivalent to optical flow prediction from SAVi.
Our feature similarity loss is *different from SAVi’s optical flow prediction conceptually, mathematically, and in terms of the resulting model performance.*
- Conceptually: for the feature similarity loss, a successful prediction requires to predict where *all semantically similar patches* are located in the next frame (see e.g. Fig. 3). In contrast, the optical flow target only contains information on how much each pixel moves independently from each other. Thus, the temporal feature similarity target is still meaningful for non-moving objects, whereas the optical flow target is trivial.
- Mathematically: for optical flow prediction, SAVi uses MSE against the RGB image, leading to a point prediction. For feature similarity, we use the cross entropy loss against the patch similarity distribution, predicting a categorical distribution. This allows modeling uncertainty especially when similar patches are prevalent or when objects enter or leave the scene.
- Performance: following the reviewer's feedback, we compared optical flow targets to VideoSAUR's feature similarity targets (**Table R3**). We trained VideoSAUR with GT optical flow for the best potential performance achievable from optical flow alone. Yet, even on the MOVi-C dataset, which favors optical flow, VideoSAUR with feature similarity significantly outperformed the optical flow variant (+10 FG-ARI). This disparity is even greater on the MOVi-E dataset, highlighting VideoSAUR's resilience to static objects and camera movement. This shows that our temporal feature similarity approach does not need privileged signals like GT optical flow. Additionally, we also do not need extra datasets (**Table R4**) to perform well.
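To make the mathematical distinction above concrete, here is a minimal NumPy sketch (our own illustration, not the paper's implementation; the plain softmax stands in for the paper's normalization in Eq. 4) of predicting a categorical patch-similarity distribution with a cross-entropy loss, rather than a point estimate with MSE:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def similarity_target(feat_t, feat_t1, temperature=0.1):
    """Per-patch categorical target over next-frame patches.

    feat_t, feat_t1: (P, D) L2-normalized patch features of frames t and t+1.
    Each row of the result is a distribution over where semantically
    similar patches are located in the next frame.
    """
    affinity = feat_t @ feat_t1.T  # cosine similarities, shape (P, P)
    return softmax(affinity / temperature)

def similarity_cross_entropy(pred_logits, target, eps=1e-12):
    # Cross entropy against the fixed similarity target: the model predicts
    # a distribution, so uncertainty over multiple similar patches is allowed.
    log_pred = np.log(softmax(pred_logits) + eps)
    return float(-(target * log_pred).sum(-1).mean())
```

Because the target is a distribution rather than a single displacement per pixel, ambiguity (several similar patches, objects entering or leaving the scene) is naturally representable.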
These new results now lead to a clearer picture of our contribution. We will adapt the text accordingly and rephrase the relation and comparison to optical flow in the paper.
> … similarity between encoder features in consecutive frames is also adopted from prior work (CRW [41]).
We clarify that both the loss function and the overall learning setup in CRW are different from our work in our response to reviewer L1X2.
> Report results of [9] on YouTube-VIS.
Thank you for the suggestion. To compare our method with DINOSAUR on the YouTube-VIS dataset, we train DINOSAUR and assess both approaches, along with STEVE and SAVi baselines, using image-based FG-ARI. **Table R1** displays the results. VideoSAUR outperforms DINOSAUR (+4 FG-ARI) and also surpasses video-based STEVE and SAVi methods, as shown in Table 1 of the main paper. This underscores the benefit of our temporal similarity loss over mere feature reconstruction.
> […] why this method outperforms prior work on MOVi-E.
We conducted an in-depth ablation study on the MOVi-E dataset, assessing the impact of the decoder (DINOSAUR MLP vs Mixer) and loss function choices. As shown in **Table R2**, using similarity loss improves FG-ARI for both decoders and reaches around 74 FG-ARI (vs 69 for feature reconstruction loss). The Mixer decoder enhances object mask sharpness by +5 mBO. Consequently, using similarity loss and Mixer jointly outperforms the MLP decoder with feature reconstruction loss. These findings supplement our main paper, offering a clearer insight into VideoSAUR's components and performance.
> The results on YouTube-VIS are low for all the reported approaches. […] simply fails less catastrophically than the baselines in the real world.
We respectfully disagree:
- Qualitatively: we provide extensive visualizations (Figure 4, Figure E.1, Figure E.5, and on our website) demonstrating successful operation on many different YT-VIS examples, which refutes the claim that the method “simply fails […] catastrophically”.
- Quantitatively: the regular-pattern baseline is a good estimate of catastrophic failure. While both SAVi and STEVE perform worse than or similar to it, VideoSAUR performs *twice* as well, showing a clear and significant signal of “working”.
Being the first to attempt the task of fully unsupervised video-object discovery on YouTube-VIS, we present a meaningful step forward and a large improvement over all compared baselines.
> […] evaluating a model trained on YouTube-VIS on DAVIS […] are not indicative of anything.
The distribution of objects is different in DAVIS and YouTube-VIS (e.g. number of objects). The transfer shows the flexibility of the object-centric model during inference. We note that generalizing results from supervised methods might be highly misleading. Nevertheless, we would consider moving this part to the supplementary.
> Inaccurate claims
The quoted passage by the reviewer differs from our actual statement in L150-L152: “Another advantage of our loss is that it is fully self-supervised, whereas approaches based on optical flow or motion masks may be hampered by the availability of labels. This is of particular importance for in-the-wild video, where motion estimation is challenging.” It highlights our loss advantage, especially when optical flow/motion estimation is tough in in-the-wild videos.
We acknowledge the phrasing could be clearer and suggest: “Methods based on optical flow or motion masks can struggle with the need for accurate flow/motion mask labels (GT or estimated), unlike our loss, which doesn't require them.”
> Notation and captions
Thank you. We will clarify the notations for the decoder and improve the descriptions of the figures. We also note that the figures were referenced earlier in the Introduction and Section 3 using “Fig.”
---
Rebuttal Comment 1.1:
Title: Re:re
Comment: I thank the authors for their detailed response. It did address some of my concerns. In particular, the new results demonstrate that the proposed objective is indeed markedly different from optical flow prediction. However, the reasons behind its effectiveness are still not entirely clear and require further analysis. For example, it would be informative if the authors could provide the additional results requested by reviewer L1X2.
---
Reply to Comment 1.1.1:
Title: Additional experimental results requested by reviewer L1X2
Comment: We are happy that we could address your concerns. Regarding the effectiveness of our temporal similarity loss, we have now run the experiment suggested by reviewer L1X2 (described in detail in a comment to that reviewer). To summarize, we find that optimizing next- and current-frame feature reconstruction improves performance over optimizing current-frame feature reconstruction alone, but that optimizing our proposed objective **brings clear additional improvements** on all datasets.
This demonstrates that the effectiveness of our approach does not just stem from predicting the future, but also from the specific form of our similarity objective. We hope that we could address your final concerns with this experiment. If so, we would politely ask to adjust the score accordingly.
---
Rebuttal 2:
Title: Reminder from AC
Comment: Dear Reviewer
Could you please check the rebuttal, if you have further concerns ?
Best,
AC | Summary: The paper aims to learn an effective object-centric feature representation for video segmentation. They build their method upon the slot attention-based framework and self-supervised ViT encoders (DINO), and propose a temporal feature similarity loss for object-centric learning. Their proposed method yields the best performance on several synthetic and real-world datasets, for both unsupervised object discovery and downstream video object segmentation tasks.
Strengths: + The paper is well-organized and easy to follow
+ The proposed temporal feature matching loss is sound and effective
+ The experimental design is comprehensive and promising
+ The proposed VideoSAUR model yields state-of-the-art performance on several datasets
Weaknesses: I have two minor concerns regarding the method and experimental design:
- One of my concerns is the use of the pre-trained ViT encoder. The majority of the experiments are evaluated with a DINO encoder, however, DINO is pre-trained on ImageNet and DINO itself is "object-centric" (see https://github.com/facebookresearch/dino/). Though the authors have already clarified that they aim to bridge the gap between the pre-trained feature encoder and real-world video object discovery, it is still interesting to see if the proposed loss is useful *without* prior knowledge. It would be useful to add an experiment that uses a self-supervised ViT ***which is only pre-trained on the target dataset, e.g. MOVi-E***. Any self-supervised pre-training (DINO, MAE, MSN or MOCO) should be fine. This additional experiment could also give a fair comparison to other models as they, at least STEVE and SAVI, did not get access to extra data for pre-training.
- Novelty is the other concern. Though the authors claim the temporal feature matching loss is novel, the overall idea is adapted from contrastive random walk [41]. The proposed normalization function (Eq. 4) is different from the original one, but ignoring the negative-scored patches should not bring a big difference in the final similarity score. Besides the loss, the mixer decoder and reconstructing in the flow space have also already been verified to be helpful for object-centric learning.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Other questions about details:
- (L38) How does the model "groups patches with similar motion into the same slot"? Neither the temporal loss nor the reconstruction loss introduces such a grouping signal.
- (L112) how are the slots $S_0$ initialized? Are they learnable, or randomly sampled from Gaussian?
- Why do VideoSAUR and several baselines perform better on MOVi-E than on MOVi-C, given that MOVi-E should be more challenging, with both moving and static objects and ego motion? Moreover, some previous works (e.g. [1,2]) have reported worse results for SAVi++ (unconditional version) on MOVi-E; given that SAVi should have even worse numbers than SAVi++, I am curious why the reported numbers for SAVi are higher, especially in comparison with [1], given that VideoSAUR is developed based on [1]. Is it because of the number of slots?
- Conceptually, what's the role of slot mixing under the SlotMixer? It seems like an additional cross-attention layer on top of the Allocation Transformer.
Ref:
[1] Bridging the Gap to Real-World Object-Centric Learning: https://openreview.net/forum?id=b9tUk-f_aG.
[2] Object discovery from motion-guided tokens: https://arxiv.org/abs/2303.15555
### Justification for rating
Though the proposed method has some limitations, I do think the proposed VideoSAUR is interesting. Currently, I hold a borderline acceptance. If the authors could fully address my first concern in the "Weakness" section, I am willing to upgrade my rating. Will also refer to other reviewers' comments.
---
The rebuttal has addressed my concern, so I upgrade my rating.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer, thank you for your detailed review and interesting suggestions to improve our paper. Below we address your questions. We hope that our additional experiments will lead you to consider upgrading your rating.
> One of my concerns is the use of the pre-trained ViT encoder. […] It is still interesting to see if the proposed loss is useful *without* prior knowledge. It would be useful to add an experiment that uses a self-supervised ViT *which is only pre-trained on the target dataset, e.g. MOVi-E*.
>
Thanks for this suggestion. Indeed, a strength of our method is that it does not require any additional inputs other than images, so it is natural to test how much the ImageNet data bias is needed. To test this, we pre-train a ViT-B/16 encoder with the Masked Auto-Encoder (MAE) approach for 200 epochs on the MOVi-E dataset using standard hyperparameters, and then use it for training VideoSAUR (referred to as MAE+MOVi-E) with the temporal feature similarity loss.
The results are presented in **Table R4**. Interestingly, we observe that VideoSAUR is able to use such features for discovering objects in MOVi-E, reaching 70 FG-ARI, while not using any external data. We also use *MAE+MOVi-E* features to train VideoSAUR on MOVi-C where it reaches similar results (59.8 FG-ARI) as VideoSAUR trained with standard *MAE+ImageNet* features. Thus, even without any external data, VideoSAUR still outperforms the SAVi and STEVE baselines. Furthermore, we expect that tuning MAE hyperparameters could further improve the results.
> Novelty is the other concern. Though the authors claim the temporal feature matching loss is novel, the overall idea is adapted from contrastive random walk [41].
>
We were indeed inspired by contrastive random walk (CRW), and we do use a similar construction of the affinities, as the reviewer correctly points out. But both the loss function and the overall learning setup in CRW are different from our work. While the supervised loss $L_{sup}$ in CRW (Eq. 3) does look similar to our loss $L^{sim}$ (Eq. 5), there are fundamental differences: in $L_{sup}$, *the affinity matrix is optimized*, while in our approach $L^{sim}$, the model is trained *to predict a fixed affinity matrix*. Thus, in CRW’s $L_{sup}$, the loss is used to maximize feature similarities between matching nodes, whereas in our approach, we train a slot representation by predicting pre-constructed self-supervised feature similarities, and use the structure in these similarities to induce an object grouping. We think that this is sufficiently different from CRW (both conceptually and mathematically) to form a novel contribution.
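The following NumPy sketch (illustrative only; both losses are simplified and the data is random) contrasts the two objectives: in a CRW-style $L_{sup}$, an affinity built from trainable features is optimized toward a fixed cycle-consistency target, while in $L^{sim}$ an affinity built from frozen self-supervised features is itself the target that a model's prediction is trained to match:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(pred_probs, target_probs, eps=1e-12):
    return float(-(target_probs * np.log(pred_probs + eps)).sum(-1).mean())

rng = np.random.default_rng(0)
P, D = 6, 8

# CRW-style L_sup: the affinity comes from *trainable* features, and the
# target is the fixed identity (cycle consistency). Gradients would flow
# into feats_a / feats_b to maximize matching-node similarity.
feats_a = rng.normal(size=(P, D))
feats_b = rng.normal(size=(P, D))
affinity = softmax(feats_a @ feats_b.T)
l_sup = cross_entropy(affinity, np.eye(P))

# L_sim here: the affinity (from *frozen* features) is the fixed target,
# and the trainable quantity is the model's predicted distribution.
model_pred = softmax(rng.normal(size=(P, P)))
l_sim = cross_entropy(model_pred, affinity)
```

In the first case the affinity is the optimized quantity; in the second it is a constant target, which is the structural difference argued above.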
> Besides the loss, […] reconstructing in the flow space have also already been verified to be helpful for object-centric learning.
While optical flow prediction has indeed been shown to be beneficial for object-centric learning, we demonstrate the novel insight that temporal feature similarities can offer similar benefits but require no flow annotations, while working sensibly for static objects and being more robust to camera motion. The novelty over optical flow prediction thus lies in broadening the kind of datasets the method can be applied to.
> How does the model "groups patches with similar motion into the same slot"?
Slot attention-like models essentially learn groupings that lead to efficient reconstructions under the restricted capacity of the latent slot bottleneck. When reconstructing RGB images, these groupings e.g. capture areas of similar color, because the information needed to reconstruct such an area is efficient to represent in a single slot. Analogously, something similar happens under our temporal similarity loss: when groups of patches have similar motion patterns (temporal feature similarities), it is most efficient to represent them in the same slot in order to predict the similarities well. The same principle holds for optical flow prediction as well. Does this sufficiently address your question?
> How are the slots initialized? Are they learnable, or randomly sampled from Gaussian?
For all VideoSAUR experiments, we use i.i.d. sampling from a Gaussian with learned mean and variance as the slot initialization (see L113). This way, the learned object representation is invariant to the object order and is also more flexible, as the number of objects can be varied during inference.
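A minimal sketch of this initialization scheme (our own illustration; the function and parameter names are hypothetical):

```python
import numpy as np

def init_slots(rng, num_slots, slot_dim, mu, log_sigma):
    """Sample initial slots i.i.d. from N(mu, diag(sigma^2)).

    mu and log_sigma are learned (slot_dim,) parameters shared by all
    slots, so the slots are exchangeable (invariant to object order) and
    num_slots can be changed freely at inference time.
    """
    sigma = np.exp(log_sigma)  # parameterize via log for positivity
    return mu + sigma * rng.normal(size=(num_slots, slot_dim))
```

Because every slot is drawn from the same distribution, nothing ties a particular slot index to a particular object, which is what allows varying the number of slots between training and inference.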
> Why the performance of VideoSAUR and several baselines have better performances on MOVi-E than MOVi-C [..] why the reported numbers of SAVI are higher? Especially for the comparison with [1] given that VideoSAUR is developed based on [1]. Is it because of the number of slots?
As we are using 15 slots for VideoSAUR and the other baselines on MOVi-E, their performance may differ from [1] and [2]. Secondly, [1] evaluates a video-based model on 1-frame videos, which could be a disadvantage for SAVi as it was trained on many-frame videos. In contrast, we evaluate SAVi on the whole video, leading to better results. Similarly, for MOVi-C vs. MOVi-E, we think this is also due to the number of slots, as changing the number of slots can significantly change the results (e.g. Table 13 in [1]).
> Conceptually, what's the role of slot mixing under the SlotMixer?
Slot mixing creates a feature vector for each spatial position (patch) that is used to reconstruct that spatial position. This is done by taking a convex combination of the slots, using different weightings for different spatial positions. The weights are computed by a dot-product between the slots and the outputs of the allocation transformer. This operation is equivalent to a single-head attention step that uses the “slots” as “values”, so indeed this could be seen as just an additional (but special) layer of the allocation transformer. We give more details on this in Appendix C.1.
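The slot-mixing step described above can be sketched as follows (our own simplified NumPy version; the real SlotMixer details are in Appendix C.1 of the paper, and the absence of logit scaling here is a simplification):

```python
import numpy as np

def slot_mixing(alloc_out, slots):
    """Single-head attention step that uses the slots as values.

    alloc_out: (P, D) allocation-transformer outputs, one per spatial position.
    slots:     (K, D) slot vectors.
    Returns (mixed, weights): for each position, a convex combination of
    the slots; each row of weights (P, K) is non-negative and sums to 1.
    """
    logits = alloc_out @ slots.T                     # dot-product scores (P, K)
    logits = logits - logits.max(-1, keepdims=True)  # numerical stability
    weights = np.exp(logits)
    weights = weights / weights.sum(-1, keepdims=True)
    return weights @ slots, weights
```

The per-position feature is thus always inside the convex hull of the slots, which is the sense in which slot mixing acts as a special (single-head, slots-as-values) attention layer.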
---
Rebuttal Comment 1.1:
Title: post-rebuttal comments
Comment: Thanks to the authors for their response. The response is well-written and can address most of my concerns.
However, after seeing the comments from other reviewers and taking another pass at the paper, I raised two other questions:
- Let's first assume $y_t^{rec}$ is fully optimized; in this case, as long as we have perfect $y_t^{rec}$ and $y_{t+1}^{rec}$, we can compute a perfect $y_t^{sim}$. Then I am curious: does the temporal similarity loss act mainly to facilitate the learning of the reconstruction loss?
- As mentioned by Reviewer XQ9f, and I agree, the real mechanism of the temporal consistency loss is unclear: does the performance improvement come from predicting the similarity score, or from predicting **both $y_t^{rec}$ and $y_{t+1}^{rec}$**, since $y_t^{sim}$ can be derived from $y_{t}^{rec}$ and $y_{t+1}^{rec}$? I think this point does matter, as it affects the global positioning/picture of the paper (temporal similarity vs. predicting the future). To make it clear, slightly different from XQ9f, I think we can validate whether predicting $y_{t}^{rec}$ + $y_{t+1}^{rec}$ instead of $y_{t}^{rec}$ + $y_t^{sim}$ achieves similar or lower performance.
If the authors can provide the ablation, it will be greatly helpful to better understand the proposed method!
---
Reply to Comment 1.1.1:
Title: Temporal similarity v.s. predict the future ablation
Comment: Dear reviewer, thank you for your suggestion. We agree that comparing predicting the next-frame features $y_{t+1}^{rec}$ directly against using them only indirectly, via the temporal feature similarity $y_{t}^{sim}$, is indeed valuable to disentangle the source of improvement behind our method.
The additional experimental results are presented in Tables R5-R7. We find that using our proposed temporal feature similarity ($y_t^{rec} + y_t^{sim}$) **brings clear additional performance benefits** (e.g., +13 FG-ARI MOVi-C; +6 FG-ARI MOVi-E) in comparison with next frame feature prediction ($y_t^{rec} + y_{t+1}^{rec}$). In addition, VideoSAUR with the combination of current and next frame feature reconstruction ($y_t^{rec} + y_{t+1}^{rec}$) performs significantly better than just current frame feature reconstruction ($y_t^{rec}$) (+7 FG-ARI on MOVi-C; +2 FG-ARI on YouTube-VIS), showing that predicting the future is generally helpful. We conclude that both the future prediction task and the specific way it is implemented in our temporal feature similarity are important to achieve the final VideoSAUR performance on all the datasets.
As for *why* the temporal feature similarity is better than pure next frame feature prediction, we believe it is because the similarity loss requires predicting relationships between features, which is not needed for pure feature prediction. In addition, the similarity prediction task does potentially contain more signal to optimize, as it removes unnecessary details (noise) from the prediction of particular next frame features. Thus, even though $y_t^{sim}$ could be derived from a perfect prediction of $y_t^{rec}$ and $y_{t+1}^{rec}$, in practice, optimizing it directly focuses the model on different aspects of the targets that turn out to be useful for object discovery.
**Implementation details for next frame feature prediction.** Reconstructing frame features $y_t^{rec}$ and $y_{t+1}^{rec}$ simultaneously with a single decoder is problematic, because the decoder masks we use for evaluation would be in reference to both the current and next frame. One way to overcome this problem is to use two decoders $d_{current}$ and $d_{future}$, each producing its own predictions and masks. In this case, masks from $d_{current}$ can be used for evaluation. While more powerful, this approach also requires more memory and is slower. In our experiments, we confirm that the version with two different Mixer decoders performs better than simultaneous reconstruction with one decoder (see rows 1 and 2 in Tables R5-R7). We use this better version for our comparisons even though it is heavier than our method, which needs only one decoder.
> does the temporal similarity loss act mainly to facilitate the learning of the reconstruction loss?
>
Regarding this question, note that the temporal similarity loss can also be used as a standalone loss, and we show that optimizing $y_t^{sim}$ on its own yields similar results to optimizing both $y_t^{sim}$ and $y_t^{rec}$ on MOVi-C in Table 2. We also checked whether optimizing $y_t^{sim}+y_t^{rec}$ leads to a lower reconstruction error than optimizing just $y_t^{rec}$ and found this not to be the case. Thus we conclude that the value of the temporal similarity loss is not just in helping the model to reconstruct better, but that it requires the model to learn additional information.
### Table R5. Future frame features reconstruction ablation on MOVi-C
| Method | Video FG-ARI | Video mBO |
|------------------------------------------------------------|--------------|-----------|
| Feat. Rec. + Next Frame Feat. Rec. | 44.6 | 23.5 |
| Feat. Rec. + Next Frame Feat. Rec. (two decoder heads) | 47.2 | 24.7 |
| Feat. Rec. + Temp. Sim. | **60.8** | **30.5** |
### Table R6. Future frame features reconstruction ablation on MOVi-E
| Method | Video FG-ARI | Video mBO |
|------------------------------------------------------------|--------------|-----------|
| Feat. Rec. + Next Frame Feat. Rec. | 61.3 | 22.1 |
| Feat. Rec. + Next Frame Feat. Rec. (two decoder heads) | 62.9 | 24.0 |
| Feat. Rec. + Temp. Sim. | **69.2** | **26.0** |
### Table R7. Future frame features reconstruction ablation on YouTube-VIS
| Method | Video FG-ARI | Video mBO |
|------------------------------------------------------------|--------------|-----------|
| Feat. Rec. + Next Frame Feat. Rec. | 33.4 | 24.6 |
| Feat. Rec. + Next Frame Feat. Rec. (two decoder heads) | 37.9 | 27.3 |
| Feat. Rec. + Temp. Sim. | **39.5** | **29.1** | | Summary: This paper proposes VideoSAUR for unsupervised video object segmentation / grouping. The key idea proposed in this paper is to use a temporal feature similarity loss, in combination with a feature reconstruction loss. The grouping is implemented with recurrent Slot Attention. Experiments are conducted on various benchmarks (MOVi-C, MOVi-D, MOVi-E, YT-VIS-2019, YT-VIS-2021, and DAVIS) with good improvement over existing methods. Written presentation is clear and easy to follow.
Strengths: * The paper has strong experimental results: (i) solid improvements over existing methods; and (ii) a comprehensive set of ablations with different insights about the problem as well as the proposed method.
* Written presentation is clear, consistent and easy to follow and understand.
* The idea of using temporal feature similarity is not new (Vondrick et al. ECCV'18, Vondrick et al. CVPR'16, maybe some others), but this paper shows an effective way to use it with self-supervised features and recurrent Slot Attention, yielding strong results in unsupervised video-object segmentation.
Weaknesses: * I don't find any issue with this paper; perhaps further evaluation on a more challenging dataset such as UVO would make this work stronger.
* A stress test pushing the method into an extreme setting, e.g., a very long video of hundreds of frames, could show when the recurrent Slot Attention will fail. This may provide hints for future work.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: * As mentioned above, I would be interested in seeing (1) how this proposed approach works on more challenging datasets; (2) how this method deals with extremely long videos (when will it fail); (3) any discussion on how to deal with long videos would be nice.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: * The reviewer does not foresee any potential negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their very positive feedback! We are glad you like the paper for its “strong experimental results” and that you find the paper “clear, consistent and easy to follow”. We answer your questions below.
> How does the proposed approach work on more challenging datasets?
>
We agree that taking VideoSAUR to even more complex datasets is very interesting! Similarly, it would also be intriguing to see how the method scales with increasing dataset size. However, given that we already are the *first* unsupervised object-centric method to use the Youtube-VIS dataset, we think that this is out of scope for this paper.
> Dealing with long videos? When will it fail? How to deal with long video?
>
We thank the reviewer for the suggestion. Indeed better performance on long videos is one of the potential future directions for our method.
As we already discussed in Appendix B.1 and the Conclusions section, there are several limitations of our current approach on long videos. First, the number of slots is static and needs to be chosen a priori, whereas in long videos objects frequently enter and leave the scene. Next, slots can get reassigned at each frame and can bind to different objects or to the background. We visualize these failures on a long video (see **Figure R1**).
To overcome these limitations, several innovations are needed. First, it is important to discover and maintain slots for objects that are currently not visible in the image. This could be done with memory mechanisms and by maintaining whether each slot is active as an additional external variable (similar to $z_{present}$ in AIR-based [1] object-centric methods). Next, a slot re-identification module that matches newly appearing objects to previously discovered but currently invisible ones could further improve the performance on long videos. Finally, learning a generative latent object-centric world model (with set-based latent dynamics) could further improve the consistency of slots for object-centric representations on long videos.
[1] Attend, Infer, Repeat: Fast Scene Understanding with Generative Models
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: Thanks for the discussions. Please include the new figure R1 and discussion (if the space allows) if the paper gets accepted. I keep my rating unchanged. | Summary: The paper considers the problem of unsupervised video-based object-centric learning. It incorporates a temporal feature similarity loss that encodes temporal correlations and introduces a motion bias for object discovery. This loss helps to achieve state-of-the-art performance on the synthetic MOVi dataset. The model is able to learn video object-centric representation on the YouTube-VIS dataset in a fully unsupervised way.
Strengths: 1) The paper is very well-written and it is pleasant to read. The visualizations and graphics are particularly nice and done with good care.
2) The proposed VideoSAUR method significantly improves performance on synthetic video datasets over the related works SAVi and STEVE. It is also able to learn video object-centric representations on the unconstrained real-world videos of YT-VIS.
3) The ablation study is extensively done. It considers all the main parts of the method along with the most important hyperparameters.
Weaknesses: I enjoyed the paper and I think that the paper is already in good shape for acceptance.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: No additional questions.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations are briefly discussed at the end of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback! We are glad you find the paper to be “very well-written” and “pleasant to read”. We hope that the reviewer would find additional experiments requested from other reviewers interesting. Their description could be found in the general response. | Rebuttal 1:
Rebuttal: We thank the reviewers for their feedback and appreciate that they found our paper "well-written", "pleasant to read", and "well-organized". In addition, the reviewers recognized that our "comprehensive ablation study” is “extensively done” and that it brings insights into the method’s performance. In addition, the proposed approach was recognized as “sound” and “effective” and, as a result, yielded strong outcomes.
In addition, we also value the reviewer’s constructive feedback, with requests for additional experiments and ablations. Below, we list the main new experiments that we present in this rebuttal:
- We compare our method with the state-of-the-art image-based DINOSAUR method and several baselines on the YT-VIS dataset showing that VideoSAUR performs better than DINOSAUR on this dataset (**Table R1 in the Rebuttal’s PDF**).
- We provide an extended ablation study on the MOVi-E dataset, where we ablate both the decoder choice and the proposed loss, showing that both are needed for the state-of-the-art performance of our method. In addition, we showed that VideoSAUR significantly outperforms the extension of DINOSAUR to the video domain (**Table R2 in the Rebuttal’s PDF**).
- We study the performance of our method when GT optical flow is used as a target. We show that the motion information of each pixel alone is a worse grouping signal than the proposed temporal similarity of *highly semantic* self-supervised features (**Table R3 in the Rebuttal’s PDF**).
- We investigate our method's performance when no additional datasets are available for self-supervised pre-training. We show that using an MAE pre-trained only on MOVi-E allows VideoSAUR to achieve performance similar to an MAE pre-trained on ImageNet on both the MOVi-E and MOVi-C datasets, showing that the object-centric bias of the ImageNet dataset is not necessary for successful object discovery with VideoSAUR (**Table R4 in the Rebuttal’s PDF**).
- Finally, we visualize and discuss how our method can fail on long videos and discuss potential future work needed to enable our method to work on long videos (**Figure R1 in the Rebuttal’s PDF**).
We are happy that the reviewers recognize the following contributions:
- This paper shows an effective way to use it **[temporal feature similarity] with self-supervised features** and recurrent Slot-Attention (tzwP)
- It incorporates a temporal feature similarity loss that encodes temporal correlations and introduces a motion bias for object discovery (avhj)
- The proposed VideoSAUR model yields state-of-the-art performance on several datasets (L1X2)
In addition, we would like to add:
- **Efficient video architecture integration**: We integrated this loss with the efficient SlotMixer decoder, where this loss synergistically reduces optimization difficulties.
________________________________________________
For convenience, we also include the same tables as in the PDF below:
**Table R1. Image-based comparison on YouTube-VIS**
| Method | Image FG-ARI |
|------------|-----------------|
| SAVi | 13.2 ± 5.0 |
| STEVE | 25.3 ± 1.8 |
| DINOSAUR | 39.2 ± 0.3 |
| VideoSAUR | 43.1 ± 0.4 |
**Table R2. Extended ablation of VideoSAUR components on MOVi-E**
| Decoder | Loss | FG-ARI | mBO |
|---------|-------------|--------|------|
| Mixer | Feature Reconstruction | 62.3 | 20.6 |
| MLP | Feature Reconstruction | 68.6 | 27.6 |
| MLP | Feature Similarity | 74.5 | 28.8 |
| Mixer | Feature Similarity | 74.1 | 34.1 |
**Table R3. Ablation of VideoSAUR feature similarity targets with GT optical flow**
| VideoSAUR | MOVi-C | MOVi-E |
|-------------------------------|--------|--------|
| w/ GT Optical Flow (backward) | 48.1 | 28.9 |
| w/ GT Optical Flow (forward) | 48.9 | 30.1 |
| w/ Feature Similarity | 60.7 | 73.9 |
**Table R4. Self-supervised method (Masked Auto-Encoder) pretraining on MOVi-E**
| VideoSAUR | FG-ARI on MOVi-C| mBO on MOVi-C | FG-ARI on MOVi-E | mBO on MOVi-E |
|---------------------|---------------|------------|---------------|------------|
| w/ MAE+ImageNet | 58.0 | 30.4 | 72.8 | 27.1 |
| w/ MAE+MOVi-E | 59.8 | 27.5 | 70.6 | 23.3 |
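For readers unfamiliar with the metric reported in Tables R1-R4, the following is a minimal sketch of the foreground-adjusted Rand index (FG-ARI) — our own illustrative implementation, not the authors' evaluation code: the adjusted Rand index between two segmentations, computed only over pixels that are foreground in the ground truth.

```python
import numpy as np

def _comb2(x):
    # number of unordered pairs, n-choose-2, applied elementwise
    return x * (x - 1) / 2.0

def fg_ari(gt, pred, bg_label=0):
    """Adjusted Rand Index restricted to ground-truth foreground pixels."""
    fg = gt.ravel() != bg_label
    a, b = gt.ravel()[fg], pred.ravel()[fg]
    # contingency table between the two labelings on foreground pixels
    _, ai = np.unique(a, return_inverse=True)
    _, bi = np.unique(b, return_inverse=True)
    cont = np.zeros((ai.max() + 1, bi.max() + 1))
    np.add.at(cont, (ai, bi), 1)
    sum_ij = _comb2(cont).sum()
    sum_a = _comb2(cont.sum(axis=1)).sum()
    sum_b = _comb2(cont.sum(axis=0)).sum()
    expected = sum_a * sum_b / _comb2(len(a))
    max_index = 0.5 * (sum_a + sum_b)
    if max_index == expected:  # degenerate case: one cluster on both sides
        return 1.0
    return (sum_ij - expected) / (max_index - expected)

# Toy example: the prediction permutes object ids but matches the grouping exactly.
gt   = np.array([[0, 0, 1, 1],
                 [0, 2, 2, 1]])
pred = np.array([[9, 9, 5, 5],
                 [9, 7, 7, 5]])
print(fg_ari(gt, pred))  # → 1.0 (ARI is invariant to label permutation)
```

Because background pixels are excluded, a model is not rewarded for lumping the (often dominant) background into one segment, which is why FG-ARI is the standard choice for object discovery benchmarks such as MOVi.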
Pdf: /pdf/07a4af3890927e3caa9a964c4323a4bfbf6eefb7.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
A unified framework for information-theoretic generalization bounds | Accept (poster) | Summary: The paper studies the generalization error of the statistical learning algorithms from the information-theoretic point of view. In particular, by leveraging a “decorrelation lemma”, the authors show how various previously established upper bounds on the generalization error, including the common information-theoretic bounds, PAC-Bayes bounds, and chaining versions of them, can be recovered using this decorrelation lemma; and therefore, present a unifying methodology for establishing these bounds.
Strengths: The authors using the introduced decorrelation lemma managed to compactly recover various existing upper bounds on the generalization error. In particular, the method naturally can be combined with the chaining methods to recover a number of results such as the generalization bounds of Zhou et al., Asadi et al., and the bound of Fernique and Talagrand on the expected supremum of a stochastic process. Moreover, the work presents high probability bounds; which is rarely common in the information-theoretic bounds on the generalization.
I believe the decorrelation lemma is a powerful technical lemma with which, hopefully, new understanding and bounds can be achieved in future work. For this reason, I think the paper could be of interest to the part of the NeurIPS audience interested in the theoretical understanding of statistical learning algorithms.
Weaknesses: 1. The main weakness of the work is that it remains only at the technical level.
- 1.a. The form of the general bounds, e.g. Theorem 3, does not seem to be easy to compute. It is not clear whether the general results can be leveraged to derive new bounds or to give a new understanding of the generalization error of learning algorithms (in general or for a particular algorithm). Can the authors provide cases where the results of this work give “new” and “simple” or “practical” results? I think the authors would agree that the justification of how coupling in Theorem 3 can be used to show that the expected generalization error is zero when $P_{W|S}=P_W$ is not convincing enough to demonstrate its utility.
- 1.b. The only cases where these general results give concrete results are when they recover previous results (e.g. information-theoretic bounds or PAC-Bayes ones), and only with worse coefficients.
- 1.c. Moreover, even at the level of intuition, it is not clear whether this work, and more specifically this decorrelation lemma, gives any new “understanding” of the generalizability of learning algorithms. Can this work, and this decorrelation lemma, provide any new intuition/understanding?
For these reasons, unfortunately, the paper remains only on the “technical level” and we can only hope that this interesting tool can be used for establishing new concrete results/understandings.
2. The paper lacks a proper literature review and comparison with other works. Indeed, only 25 works have been cited, most of which are merely listed in the very short introduction. The authors need to widen the comparison of the current work with the previous literature on generalization error (including different information-theoretic approaches, PAC-Bayes ones, compression-based ones, etc.) to make a clear picture of where this work and its tools stand.
3. As an example of the previous point, perhaps, as claimed by the authors, one of the main advantages of this work is the unifying methodology for deriving various existing bounds. This is indeed interesting, as has been done in some other works before. Non-exhaustive examples of some of these works, sorted by year, are: “PAC-MDL Bounds” (Blum and Langford 2003), arXiv:1912.01439, arXiv:2006.13057, arXiv:2105.01747, arXiv:2110.11216, arXiv:2111.05275, arXiv:2203.02474, arXiv:2303.05369. Although some of these works are less directly related as they consider different approaches (and some maybe are more directly related), I believe it is helpful to put this work in this context as well: how does the unifying approach of this work differ from previous attempts? What are the similarities? What are the “limitations” of those works and this one?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The main ones are mentioned above. Some other minor typos/comments:
- Although trivial, $K$ and $C$ are not introduced in line 93.
- In lines 120-121, it is claimed that “the right-hand side can be further upper-bounded in terms of the information divergence $D(P_{Y|X}\| Q_Y)$ using Proposition 1.” Can it be done for any function $f(y)$?
- Typo in line 141: $\frac{|gen(w,S)|}{\sigma \sqrt{6/n}}$
- The proof of Theorem 5, or a comment on how it is derived seems to be missing.
- There are typos in the proof of Proposition 1; after line 367, the LHS should be $\frac{d\mu}{d\nu}\log(\frac{d\mu}{d\nu}+1)$. A similar typo exists on the first line after 368.
- The term “f” seems to be missing in the last line of equations after line 379.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Mentioned above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Regarding the decorrelation lemma: both our decorrelation lemma and the Donsker-Varadhan lemma use convex conjugate pairs to bound a product or an expectation of a product. The main difference is that, when using the decorrelation lemma, we can work with some functional of the density ratio $d\mu/d\nu$, which is a more primitive object and can be further relaxed into divergence-like quantities.
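As a self-contained numerical illustration (not taken from the paper; the Gaussian pair and witness functions below are our own choices), the Donsker-Varadhan formula $D(\mu \Vert \nu) = \sup_g \lbrace \langle \mu, g \rangle - \log \langle \nu, e^g \rangle \rbrace$ referenced above can be checked by Monte Carlo, with the supremum attained at the log-density ratio $g = \log \frac{d\mu}{d\nu}$:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 1.0                      # mu = N(m, 1), nu = N(0, 1)
kl_exact = m ** 2 / 2        # closed-form D(mu || nu) for this Gaussian pair

x_mu = rng.normal(m, 1.0, 500_000)
x_nu = rng.normal(0.0, 1.0, 500_000)

def dv_bound(g):
    """Donsker-Varadhan lower bound: E_mu[g(X)] - log E_nu[exp(g(X))]."""
    return g(x_mu).mean() - np.log(np.exp(g(x_nu)).mean())

def g_opt(x):
    return m * x - m ** 2 / 2  # log-density ratio: the optimal witness

def g_sub(x):
    return 0.5 * x             # an arbitrary suboptimal witness

print(kl_exact, dv_bound(g_opt), dv_bound(g_sub))
```

With the optimal witness the bound matches the exact relative entropy (up to Monte Carlo error), while any other witness gives a strictly smaller value; the decorrelation lemma replaces this variational device by working directly with functionals of $d\mu/d\nu$.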
- In lines 120-121, it can be done for any $f$ satisfying appropriate moment constraints, although Hölder’s inequality will be needed in general before invoking Proposition 1. We remark, though, that this statement should not be taken as a theorem, but as part of an informal discussion illustrating the use of the decorrelation lemma.
- The proof of Theorem 5 relies essentially on the same argument as the one used to prove Theorem 4. We will clarify this in the final version.
- Thank you for catching the typos in the proof of Proposition 1. | Summary: This paper describes some steps of the standard formula to obtain generalization error bounds: (i) decoupling of the joint distribution + (ii) chaining. Then, they use this formula contributing mainly on the first front by deriving new decoupling results based on Orlicz $\psi\_p$-norms that can be cast into the standard decoupling results based on the relative entropy.
Additionally, they also include a section describing results based on couplings between the hypothesis' distribution obtained with the algorithm's kernel evaluated on a training set and the hypothesis' marginal distribution. These are used to develop their bounds on chaining, but they can stand alone independently.
Finally, the authors note that their results can be extended to high-probability PAC-Bayes bounds and that these can be used to also recover bounds of the type of Fernique's and Talagrand's bounds on the expected supremum of a stochastic process.
Strengths: The paper is generally well-written and easy to follow. It does a good job summarizing how to derive generalization error bounds using Orlicz norms and, therefore, using the relative entropy of the posterior hypothesis distribution $P\_{W|S}$ with respect to a prior $Q\_W$.
Their decoupling result (or "decorrelation lemma") is interesting, and they gain clarity and interpretability by relaxing the tighter standard result they could have obtained with Young's inequality. I believe that Lemma 1 is the main result of the paper, which is later interpreted and contextualized in the generalization error set-up.
It is good that the presented bounds can be brought down (up to constants) to known bounds in the community. This is usually a necessary "goodness test" to know if the results obtained are valuable. Particularly, it was good to see that Fernique's and Talagrand's bounds were recoverable this way.
[*Even though the strengths section will be shorter than the weaknesses, I believe this is a good paper. The authors should note that this will be the case only to help them improve the manuscript and to expand on each of the weaknesses so it can be easily addressed.*]
Weaknesses: While their "decorrelation lemma" is novel and interesting, the applications of this lemma afterwards for the estimates using couplings, the usage of chaining in the space of measures, and the tail estimates are standard and shown otherwise with different decoupling lemmas (see e.g. [7,9,11,15,16,17,18] of the references of their own paper). While probably not on purpose, the paper seems to position itself as if it is introducing this set of steps to obtain this kind of bounds. This is not accurate, as these procedures have been used and done previously, just with a different (although maybe less general in some situations) decoupling lemma. I believe it would benefit the paper to make this clear both in the introduction and at the start of each of these sections.
The paper obtains results based on the inverse of an Orlicz norm of the Radon-Nikodym derivative of the posterior with respect to the prior. While this seems to be more general, the paper only gives examples of this instantiating into relative entropy-based bounds. In the end, it does not provide any example of anything new to gain by using their framework with respect to what was already known. This is not super problematic, but when the constants of the bounds are deteriorating under this framework, it is good to show that it can produce new results that lead to interesting conclusions.
Similarly to the comment above, the paper does not give examples of or motivation for many of its bounds. For instance:
- Why are the bounds using couplings useful? The paper mentions the fact that when the algorithm ignores the data, i.e. $P\_{W|S} = P\_W$ a.s., and the prior $Q\_W$ is chosen to be the real marginal of the hypothesis $P\_W$, then the resulting bound is exactly zero, which is not the case for the bounds presented in Section 4. However, the bounds in Section 4 are for the expected *absolute* generalization error, where the extra term $O(1/\sqrt{n})$ is hard to avoid (or unavoidable in many cases). In Section 5, the bounds are for the expected generalization error instead, where the standard bounds (e.g. from [7]) already achieve a zero generalization error with this set-up.
- Why are the bounds using chaining useful? The paper basically uses the same techniques as [17] with their "decorrelation lemma". In [16,17] they come up with a slightly artificial example to showcase why their results are useful when compared to previous bounds. It is unclear if this technique provides any practical advantage over previous results, either in terms of obtaining bounds with a better rate in certain settings, in terms of interpreting the resulting bounds, or even in terms of simplicity in obtaining actually computable bounds.
The paper does not discuss some relevant literature in certain parts of the text:
- In line 102 they state that they "define the conditional divergence [...]". This is a standard notation for the conditional divergence, sometimes credited to Verdú, although I am unsure which is the origin. Probably this is just a writing thing, but it contrasts with the previous introduction of the relative entropy where they introduce it with "[...] is defined as".
- Even though the techniques employed are different, some mention to [A] would be interesting in the introduction and/or in the preliminaries. In this paper, they obtain PAC-Bayes bounds considering Orlicz norms with general Orlicz function, not necessarily $\exp(x^p) - 1$.
- In Section 5, it would be interesting to compare and/or mention other works that deal with the couplings of the posterior and the prior to bound the generalization error, e.g. [B,C,D]. In [C], for instance, they also mention the relationship of these bounds with chaining in Appendix A and with the relative entropy and subgaussian conditions in Appendix B.
- In Section 7, it would be good to compare and discuss the work in [E] regarding chained generalization error bounds.
**Additional References**
[A] Amedeo Roberto Esposito, Michael Gastpar, and Ibrahim Issa. "Generalization Error Bounds Via Rényi, f-Divergences and Maximal Leakage". IEEE Transactions on Information Theory. 2021.
[B] Hao Wang, Mario Diaz, José Cândido S. Santos Filho, and Flavio P. Calmon. "An Information-Theoretic View of Generalization via Wasserstein Distance". IEEE ISIT. 2019.
[C] Borja Rodríguez-Gálvez, Germán Bassi, Ragnar Thobaben, and Mikael Skoglund. "Tighter Expected Generalization Error Bounds via Wasserstein Distance". NeurIPS. 2021.
[D] Ron Amit, Baruch Epstein, Shay Moran, and Ron Meir. "Integral Probability Metrics PAC-Bayes Bounds". NeurIPS. 2022.
[E] Eugenio Clerico, Amitis Shidani, George Deligiannidis, and Arnaud Doucet. "Chained Generalisation Bounds". COLT. 2022.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - I believe in the equations after line 379 there is an $f$ missing in the last equality.
- I don't really follow the set of inequalities after line 381. Could you please clarify it?
- Can (how does) this framework incorporate other advances in generalization error bounds based on mutual information like the single sample bounds from [8], [F], [G] or the data-dependent bounds from [H], [12], [F]?
- How can we interpret the provided bounds? Can new results be obtained from them other than recovering the standard relative entropy-based bounds? Could you give some examples where the abstraction to Orlicz norms is beneficial to the analysis?
**Additional References**
[F] Borja Rodríguez-Gálvez, Germán Bassi, Ragnar Thobaben, and Mikael Skoglund. "On Random Subset Generalization Error Bounds and the Stochastic Gradient Langevin Dynamics Algorithm". ITW. 2020.
[G] Ruida Zhou, Chao Tian, and Tie Liu. "Individually Conditional Individual Mutual Information Bound on Generalization Error". IEEE Transactions on Information Theory. 2022.
[H] Jeffrey Negrea, Mahdi Haghifam, Gintare Karolina Dziugaite, Ashish Khisti, and Daniel M Roy. "Information-theoretic generalization bounds for SGLD via data-dependent estimates". NeurIPS. 2019.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations of the paper are not generally pointed out in the paper (e.g. some of the questions and weaknesses above are not discussed such as how the extension to Orlicz spaces gives any advantage to the standard relative entropy and the decoupling lemma using Donsker-Vardhan).
Regarding potential negative societal impact, there is no mention but there is no need to be as this is theoretical, fundamental research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: It is not the intent of our work to cover all existing generalization error bounds or even to replace existing approaches (many of which, as this review correctly points out, make use of various decoupling lemmas). Most of the additional references listed in the review are indeed very relevant, and we will cite and contextualize them in the final version.
Our decorrelation lemma can be considered as an alternative to the Donsker-Varadhan lemma that allows us to work directly with the tails of Radon-Nikodym derivatives (or density ratios), rather than with various expected values, such as the relative entropy. Both the $\psi_2$ function and the relative entropy arise naturally whenever one can engineer a subgaussian condition for a properly chosen centered process, say, by symmetrization techniques. This is often possible with minimal assumptions. However, under other tail conditions, it should be possible to use a similar approach based on estimating Legendre-Fenchel conjugates to come up with different decorrelation lemmas covering more general scenarios.
The benefit of using couplings is that we can actually relate the quantities on the upper bounds to an optimal transportation problem. Specifically, a common quantity in our bounds that rely on couplings is of the form ${\bf E}[C(U,V)R(\mu,\nu)] + {\bf E}[C’(\bar{U},\bar{V})]$, where $(U,V),(\bar{U},\bar{V})$ are random pairs in the hypothesis space; $C, C’$ are some cost functions, and $R$ is some functional of the density $d\mu/d\nu$; and $\mu$ is data dependent, while $\nu$ is not. Observe that the second term is the standard transportation cost under the law of $(\bar{U},\bar{V})$, while the first term can be thought of as an information-weighted transportation cost due to the presence of $R(\mu,\nu)$, especially if we take $\nu$ to be a marginal of $\mu$ (after averaging out the data). Thus, we can interpret $R$ as a functional of the ``information gain" due to observing the data. We did not mention this perspective in the paper because the cost function would be somewhat exotic if we were to view it literally in the context of a transportation problem. However, we do mention that the choices of couplings and priors are flexible, so that we may find an optimal coupling and prior for some particular cases.
Now we briefly illustrate the benefit of using chaining in the space of measures. We consider the term ${\bf E}[C(U,V)R(\mu,\nu)]$. When the posterior $\mu$ is not absolutely continuous w.r.t. the prior $\nu$, the functional of the density $d\mu/d\nu$ becomes vacuous. However, if we use chaining, then the term containing the density ratio is weighted by a distance-like quantity as long as we make sure that the entire summation (such as the summation on the RHS of Theorem 4) is finite. This is a much weaker assumption than $\mu \ll \nu$. A relevant example is the Dudley entropy integral where we also have this multiplicative form $\epsilon \sqrt{\log N(\epsilon)}$. Here $\log N(\epsilon)$, the epsilon-entropy, will generally become infinite as $\epsilon$ goes to zero, but this will not be an issue as long as the integral is finite.
Some comments and answers to the questions:
- We were certainly not claiming the definition of conditional divergence to be original; it is standard in the information theory literature, going back at least to the text of Csiszár and Körner. We should have been more consistent in using the passive voice (''is defined as" rather than ''we define it as").
- There is indeed a missing $f$ after line 379.
- Clarification of the chain of inequalities after line 381: the first inequality uses the definition of $E$ to replace $g$; the second inequality uses the fact $1 \le \langle \nu, \exp(g^p) \rangle$; the third inequality is due to $(a+b)^{1/p} \le a^{1/p} + b^{1/p}$ for $a, b \ge 0$.
- Partial answers to the last two questions are addressed in the previous paragraphs.
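For reference, the elementary facts behind chains of inequalities like the one after line 381 can be summarized as follows (our own hedged summary, assuming the Orlicz function $\psi_p(x) = e^{x^p} - 1$ with $p \ge 1$, as used elsewhere in the discussion):

```latex
\begin{align*}
  \psi_p^{-1}(y) &= \bigl(\log(1+y)\bigr)^{1/p}, \\
  (a+b)^{1/p} &\le a^{1/p} + b^{1/p}, \qquad a, b \ge 0, \\
  \psi_p^{-1}(uv) &\le \bigl(\log(1+u) + \log(1+v)\bigr)^{1/p}
      \le \psi_p^{-1}(u) + \psi_p^{-1}(v), \qquad u, v \ge 0,
\end{align*}
```

where the last line uses $1 + uv \le (1+u)(1+v)$ together with the subadditivity of $x \mapsto x^{1/p}$.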
---
Rebuttal Comment 1.1:
Title: Answer to rebuttal
Comment: Thank you for your rebuttals (both the individual one and the general one). Let me expand further on some things that remain unclear to me.
1. You mention that the decorrelation lemma allows you to work with the tails of the Radon-Nikodym instead of various expected values, such as the relative entropy. It is true that $D(\mu \Vert \nu) = \mathbb{E}\_\mu \log \frac{d\mu}{d\nu}$, but you also deal with $\mathbb{E}\_\mu f \psi\_p^{-1} \frac{d\mu}{d\nu}$. Hence, I do not agree with that statement.
2. Continuing with the point above, I agree that working with an Orlicz norm gives you more generality than working with the relative entropy and that a better understanding of the tail of $\frac{d\mu}{d\nu}$ can be gained in certain situations. However, I am still missing some examples of this. Could you give some example of some "common" tail behavior where the abstraction to an Orlicz norm provides some advantage? I believe this would improve the paper substantially.
3. Regarding the couplings. I like this interpretation, I believe it should be mentioned in the paper to give some intuition. I still do not see there is a practical benefit on its own, but I like it conceptually. Also, I enjoyed that you can use it together with chaining to obtain Wasserstein-2 based bounds without a Lipschitz assumption. Could that be done without the need of chaining?
* I still believe that it should be acknowledged in Section 5 that bounds that go to 0 can be obtained not because one is using couplings, but because one is bounding the generalization error instead of the absolute generalization error.
4. Regarding chaining. Again, I like the bound presented in the general rebuttal, it is conceptually nice. I understand that the successive refinements will make the bound not to go to infinity in certain situations. Also, I appreciate Dudley's metric entropy to bound the suprema of processes. \
However, we have applications of the metric entropy to real scenarios. For instance, we can study the Wasserstein concentration of the empirical measure to the true measure, and obtain a true rate of $\mathcal{O}(1 / \sqrt{n})$. On the other hand, I do not see how this can be used for the generalization error of a learning algorithm. The examples given in [16,17] are quite artificial. Could you give some example where we can see some application of this bound to a learning problem?
---
Reply to Comment 1.1.1:
Title: Thank you for the additional questions.
Comment: 1. The presence of the nonnegative function $f$ allows for incorporating information about the tail behavior of $d\mu/d\nu$, e.g., we could take $f = {\bf 1}_{\lbrace d\mu/d\nu \ge e^r \rbrace}$ for some threshold $r$, or we could use some other indicator of a tail event. The expectation ${\bf E}[f \psi^{-1}_p(d\mu/d\nu)]$ could be further upper-bounded using Hölder's inequality. We have not had the occasion to use arguments of this form in the paper, but we interpreted the original question about the decorrelation lemma vs. Donsker-Varadhan in general terms, not just in the context of our work. Moreover, the tail estimates in Section 7 are an illustration of how the decorrelation lemma can be used beyond expectation values (cf. the first paragraph of Section 7).
2. Given the space and time constraints, we don't think we could come up with an example right now that would not be contrived in some sense. Overall, as we have emphasized in our original rebuttal, we do not subscribe to the idea that every new paper on generalization bounds is in ''competition'' with earlier work and thus has to demonstrate some ''advantage'' in terms of the concrete examples it can handle that previous work could not. We do, however, believe that our approach has certain conceptual advantages over some of the earlier work; in particular, many of the existing results that have relied on tailor-made constructions can now be explained in a more transparent way.
3. The benefit of the Wasserstein-2 bounds is that they can be combined with chaining. By letting $K=1$, we have one term instead of a summation of several terms.
- Sorry about the confusion. You are right that the reason we can obtain $0$ for the example in Sec. 5 is that we are bounding the generalization error instead of the absolute generalization error. However, what we would like to emphasize is that using coupling can remedy the issue of the extra term that arises from our use of the decorrelation lemma instead of decoupling using Donsker-Varadhan. For example, in Theorem 1, without coupling the best thing we can do to bound the generalization error is to first bound it by the absolute generalization error, in which case a term on the order of $1/\sqrt{n}$ will always be present.
4. Since computing or estimating covering numbers is a much easier task, Dudley’s entropy integral is usually more practical to use. However, the Talagrand-Fernique bound on the suprema of subgaussian processes in terms of majorizing measure is sharp, unlike Dudley’s entropy integral. Our bound treats the supremum as expectation with respect to a random measure and thus recovers the Talagrand-Fernique majorizing measure bound. The key idea here is that we can derive both the bounds of [16] and [17] and the (sharp) Talagrand-Fernique bound using our methodology. This suggests that we can handle less artificial situations than the ones considered in [16] and [17] by building on our approach together with that of [18]. Again, given the constraints of space and time, we feel that this is better left for future work. | Summary: The paper presents a unified perspective of information-theoretic generalization bounds through decorrelation and coupling/chaining. Various existing generalization bounds in the literature are recovered or generalized via this perspective. The Fernique-Talagrand upper bound on the expected supremum of subgaussian processes also emerge as a special case from this framework.
Strengths:
I did not check carefully the proofs in the supplementary materials, but the presented results look sound to this reviewer. The paper offers a good service to the learning theory community for understanding information-theoretic generalization bounds and key techniques therein for their development. The development in this paper is original, to this reviewer.
The paper is also clearly written. I appreciate the authors' informal summary in section 1.1, making the paper easy to read.
Weaknesses: On the minor side, the work appears to be limited only to unification, without aiming at developing novel and/or tighter bounds using the developed tool, even though some bounds are stated in more general forms (e.g., Theorem 7) and some are shown to have certain improvements (e.g., Theorem 3). It would be much better appreciated if the authors could further demonstrate the power of their framework by presenting novel bounds of greater significance.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In the discussion after Theorem 3, the authors demonstrated the advantage of the theorem in the trivial case where W and S are independent. Are there more interesting (i.e. less trivial) cases where such an advantage can be demonstrated?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: For Theorem 3: if the output of the algorithm does depend on the data, we have to choose suitable couplings and prior, such that the sum of the two terms on the right-hand side of (12) is minimized. We doubt that there exists a single procedure for finding such optimal choices in general cases (or even in ``less trivial cases''), but this is definitely an important direction for future work. Also, we use Theorem 3 to obtain results beyond the independent case. For example, via Corollary 3, Theorem 3 leads to the results of Section 6 (Theorems 4 and 5), which make nontrivial use of couplings beyond the case of independent $S$ and $W$.
---
Rebuttal Comment 1.1:
Title: Thank you for the response.
Comment: I am keeping my rating. | Summary: This paper proposes a unified framework for deriving information-theoretic generalization bounds for learning algorithms. The main technical result relies on a probabilistic decorrelation lemma based on a change of measure and Young’s inequality in $L_{\psi_p}$ Orlicz spaces. Combining it with other techniques, such as summarization, couplings, and changing the space of probability measures, new upper bounds on the generalization error can be obtained both in expectation and in high probability. The proposed framework also recovers many of the existing generalization bounds as special cases, including the ones based on mutual information, conditional mutual information, stochastic chaining, and PAC-Bayes inequalities. Strength: Lemma 1 provides a very general way to derive generalization error bounds in multiple setting, and it recovers many existing results as special case.
Strengths: The decorrelation Lemma 1 provides a very general way to derive generalization error bounds in multiple settings, and it recovers many existing results as a special case.
Weaknesses: As the proposed bounds are not based on standard information measures, is there a way to evaluate these quantities even in some illustrative examples, like the mean estimation problem considered in [8] and [17]?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Could the proposed framework recover the ICIMI bounds obtained in the following paper? In general, is it compatible with the individual sample technique used in [8] and [12]?
Zhou, Ruida, Chao Tian, and Tie Liu. "Individually conditional individual mutual information bound on generalization error." IEEE Transactions on Information Theory 68, no. 5 (2022): 3304-3316.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations have been addressed well in the conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Since our decorrelation lemma can be used as an alternative to Donsker-Varadhan, it should be possible to obtain the ICIMI bounds using our approach since a key lemma used to prove these bounds also makes use of Donsker-Varadhan. Moreover, it should be possible to obtain the results of [8] and [12] proved using the approach of [14], which also relies on Donsker-Varadhan and random subsampling of the training data. The same technique as we used in our proof of the original CMI bound of Zakynthinou and Steinke could then be used to recover the main results of [14] (possibly with different constants).
---
Rebuttal Comment 1.1:
Title: Thank you for the response.
Comment: I believe the paper will benefit from discussing some simpler examples, and I will keep my score. | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for their detailed and careful reviews. In fact, we wish all NeurIPS reviews adhered to such a high standard! While we address the points raised by each reviewer in individual rebuttals, we would like to clarify a few points that were common to all the reviews.
The literature on information-theoretic generalization bounds is rather extensive and continues to grow. We thank the reviewers for mentioning a number of works we should have cited; we will do so in the final version and discuss their relation to other work whenever possible. We neither believe nor claim that all existing results can be unified under a single framework. The unification we speak of in our work is to ``effectively combine the information-theoretic approach with the classical framework based on various measures of complexity of the hypothesis class" (lines 29-30 in our paper). This can be (and has been) done in different ways; we believe our approach is valuable because it rests on a couple of simple but very flexible ingredients and because it seamlessly interpolates between classical bounds of Fernique and Talagrand and more recent information-theoretic results.
Also, here we would like to provide an example we did not include in the paper due to space limitations. This example may help to provide more intuition about our results, specifically with respect to the use of couplings as in Theorem 3.
Assume the loss function satisfies (13) as in Theorem 5. We take $Q_W = P_W$, take $P_{W_k|S}$ to be some points on the geodesic with respect to Wasserstein-2 distance with endpoints $P_{W|S}$ and $P_W$, and take $\rho_{W_kW_{k-1}}=P_{W_kW_{k-1}}$. Then there exist data-dependent couplings $P_{W_k W_{k-1}|S}$, such that
$$
{\bf E}[{\rm gen}(W,S)] \lesssim \frac{1}{\sqrt{n}}\left( {\bf E}[W_2(P_{W|S}, P_W)] + \sum_{k=1}^K {\bf E}[W_2(P_{W_k|S}, P_{W_{k-1}|S})\sqrt{D(P_{W_k W_{k-1}| S}|| P_{W_k W_{k-1}} )}]\right).
$$
Observe that the first term is the expected Wasserstein-2 distance between the posterior and the prior, and the second term is a sum of ``divergence-weighted'' Wasserstein distances. Also note that the form of the second term is in the spirit of the Dudley entropy integral, where the Wasserstein distance corresponds to the radius of the covering ball and the square root of the divergence corresponds to the square root of the metric entropy. We believe such a bound, where the Wasserstein distance appears without a Lipschitz assumption on the loss function, is new. As far as we can tell, it is not covered by references [B], [C], and [E] brought up by Reviewer 7JF4.
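To make the geodesic construction more concrete, here is a small numeric illustration of our own (not from the paper): for one-dimensional Gaussians, the Wasserstein-2 distance has a closed form and the $W_2$ geodesic interpolates means and standard deviations linearly, so intermediate points $P_{W_k|S}$ placed along the geodesic between the posterior and the prior have segment distances that sum exactly to the endpoint distance. All numeric values below are arbitrary toy choices.

```python
import math

def w2_gauss(m1, s1, m2, s2):
    # Closed-form W2 distance between N(m1, s1^2) and N(m2, s2^2) in 1-D.
    return math.sqrt((m1 - m2) ** 2 + (s1 - s2) ** 2)

def geodesic_point(m1, s1, m0, s0, t):
    # Point at parameter t on the W2 geodesic: t=0 gives N(m1, s1^2),
    # t=1 gives N(m0, s0^2); means and std devs interpolate linearly.
    return ((1 - t) * m1 + t * m0, (1 - t) * s1 + t * s0)

# Toy endpoints standing in for the posterior P_{W|S} and the prior P_W.
m1, s1 = 0.0, 1.0
m0, s0 = 3.0, 2.0
ts = [0.0, 0.25, 0.5, 0.75, 1.0]
pts = [geodesic_point(m1, s1, m0, s0, t) for t in ts]

# Along a geodesic, consecutive segment lengths add up exactly to the
# endpoint distance -- the additivity exploited by the chaining argument.
segment_sum = sum(w2_gauss(*pts[k - 1], *pts[k]) for k in range(1, len(ts)))
assert abs(segment_sum - w2_gauss(m1, s1, m0, s0)) < 1e-9
```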
Riemannian Projection-free Online Learning | Accept (poster) | Summary: The authors present online algorithms for geodesically convex losses on Riemannian manifolds. Their algorithms do not call the expensive operation of projection onto a feasible set. Instead of projection, they rely on two oracles to provide a direction of descent: a separation oracle and a linear oracle. Both require an extension of the concept of hyperplanes from Euclidean space to the manifold, for which they use an inverse exponential map. This map transforms points on the manifold into tangent space vectors, enabling operations on the manifold without the need for explicit projection operations. In addition, they also consider the projection onto a geodesic ball, which is computable with high accuracy and thus effectively projection free. Their algorithms give adaptive regret guarantees which are sublinear in the horizon.
Strengths: - The authors' contribution is significant as they address the challenge of projecting onto a Riemannian manifold, which is computationally expensive and can lead to geometric distortions. This area of research seems relatively less explored.
- The writing style is clear and they provide a comprehensive background on Riemannian geometry, making it easy for a reader to grasp the problem and motivation.
Weaknesses: As I am not familiar with this area, I am unable to identify any weaknesses or limitations.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I am not sure why $\tilde{\mathcal{K}} = (1-\delta) \mathcal{K}$ is considered as the feasible set for the separation oracle.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have acknowledged the limitations of their work, specifically the absence of a membership oracle, and have discussed potential enhancements to their algorithms. These improvements include utilizing the separation oracle for strongly convex losses, reducing the dependence of the number of oracle calls on the set's diameter, and a faster method to optimise the objective in the linear oracle.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the time and effort you put into understanding our work. Your supportive and constructive review means a lot to us. Below, we have provided answers to your specific questions and addressed your concerns.
> I am not sure why $\tilde{\mathcal{K}} = (1-\delta) \mathcal{K}$ is considered as the feasible set for the separation oracle.
Thanks for your question. The main reason is that we want to make sufficiently large progress with each call to the separation oracle. By Lemma 2, we need a separating hyperplane $-\left<\text{Exp}_y^{-1}z,g\right>\geq Q$ and the progress at each call is on the order of $O(Q^2)$. A separation oracle may generate a separating hyperplane in the form of $-\left<\text{Exp}_y^{-1}x,g\right> > 0$ for any $x\in\mathcal{K}$, but the corresponding $Q$ could be arbitrarily small, even as low as $T^{-100}$. This could lead to a significant number of oracle calls. By computing an infeasible projection onto $\tilde{\mathcal{K}} = (1-\delta) \mathcal{K}$, we can achieve projection-free online learning with a more reasonable $O(T)$ oracle calls. This idea is briefly outlined in our draft (Lines 218-223).
---
Rebuttal Comment 1.1:
Comment: This was helpful, thanks.
---
Reply to Comment 1.1.1:
Title: Response
Comment: You are welcome. Thank you for your positive evaluation of our work! | Summary: The paper considers Riemannian online optimization problems over sets of constraints and tries to tackle them by avoiding projections onto the constraint set. This is an already established line of research in Euclidean optimization and the results follow the structure of Garber and Kretzu (2022). The results of the latter are adapted to the Riemannian setting using well-known geometric bounds.
Strengths: The paper is concerned with problems of profound importance for the NeurIPS community. It is well-written and can be followed easily by people with a reasonable background in Riemannian optimization. The use of geometric bounds (law of cosines in negative curvature, spread of Jacobi fields) is clearly explained. The authors pay special attention to computability issues behind the separating and linear oracles used, building upon the contributions of Weber and Sra (2022b). The convergence guarantees match the ones of the Euclidean setting in Garber and Kretzu (2022) up to constants.
Weaknesses: I am not thrilled by the level of originality since the paper follows closely the one by Garber and Kretzu (2022) and used geometric bounds that are well-known in the Riemannian optimization community for quite some time. Still, it is a clearly-written concrete contribution to a problem of interest, thus I think that publication in the conference is totally justifiable.
There are some points in the paper that I find peculiar, as I specify in my questions' section. This is the main reason that I give a score only slightly above the acceptance threshold (which I will revisit in case the authors address my concerns convincingly).
I would advise the authors to revisit the title of the paper. When I see "On ..." I automatically form the impression that the paper features some vague discussion about a research topic without really useful contributions, which is not the case here.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. I see some slight discrepancy in the selection of parameters and worst-case regret bounds between this paper and Garber and Kretzu (2022) (I will refer to that as GK from now on). Could you explain why this is happening? To be more specific:
Theorem 1 in this paper and Theorem 6 in GK do not have exactly the same constants. I think you choose $c=1/2$ and $c_1=4$, but I cannot see how the regret bound and the bound in the number of oracle calls become the same for $\zeta=1$.
The parameters $\eta$ and $\delta$ in Theorem 2 of this paper are not chosen the same way as the ones in Theorem 7 of GK with respect to $T$. This is quite worrying.
Theorem 3 in both papers have the same parameters (for $\zeta=1$), but the regret bound features slightly different constants.
There also seem to be discrepancies of this kind between Lemma 1 and Theorem 4 of the two papers.
2. I struggle to understand Algorithm 5, I think that after the if statement, you should have $< \exp^{-1}_{x_i}(y),\exp^{-1}_{x_i}(v_i) > \leq \epsilon$ or $d(x_i,v_i)^2 \leq 3 \epsilon$. Is that true?
3. As I see the results, it should be pretty straightforward to have similar ones for manifolds of positive curvature (with $\zeta=1$ and $\bar r=r$). Is there any specific reason that you didn't attempt that?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The authors adequately discuss some limitations of their work in the conclusion of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your detailed and constructive feedback. Your specific questions and concerns have been addressed as follows.
> I see some slight discrepancy in the selection of parameters and worst-case regret bounds between this paper and Garber and Kretzu (2022) (I will refer to that as GK from now on). Could you explain why this is happening?
Thanks for the question on the discrepancy between our paper and Garber and Kretzu (2022). We clarify these concerns item by item as follows.
1) The discrepancy between our Theorem 1 and Theorem 6 of Garber & Kretzu (2022) might stem from a misunderstanding. Upon closer examination, we can observe that Theorem 6 of Garber & Kretzu omitted the dependence on $c_1$, so its guarantee can be rewritten as:
$$
\text{Reg}\_T\leq\left(GRc+\frac{Gc_1r}{4}+\frac{4GR^2}{c_1r}\right)\sqrt{T}.
$$
In our work, the parameters correspond to $c=\frac{1}{2}$ and $c_1=\frac{4R}{r}$. By substituting these values into Theorem 6 of Garber & Kretzu (2022), we obtain:
$$
\text{Reg}\_T\leq\left(GRc+\frac{Gc_1r}{4}+\frac{4GR^2}{c_1r}\right)\sqrt{T}=\frac{5GR\sqrt{T}}{2}.
$$
This result matches the guarantee of our Theorem 1 when setting $\zeta=1$. Similarly, we can compute the number of oracle calls in Theorem 6 of Garber & Kretzu (2022):
$$
N_{calls}\leq \left(\frac{c_1R}{cr}+\frac{c_1^2}{4c^2}+1\right)T=\left(\frac{8R^2}{r^2}+\frac{16R^2}{r^2}+1\right)T.
$$
This is in agreement with our result, once again, when setting $\zeta=1$ and $\bar{r}=r$.
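These substitutions can be double-checked numerically; a quick sketch of ours (the values of $G$, $R$, $r$ are arbitrary positive samples, not from the paper):

```python
# Verify the substitution c = 1/2, c_1 = 4R/r in Theorem 6 of
# Garber & Kretzu (2022) for arbitrary sample values of G, R, r.
G, R, r = 2.0, 3.0, 0.5
c, c1 = 0.5, 4 * R / r

# Coefficient of sqrt(T) in the regret bound.
regret_coeff = G * R * c + G * c1 * r / 4 + 4 * G * R ** 2 / (c1 * r)
assert abs(regret_coeff - 2.5 * G * R) < 1e-9  # matches (5/2) G R

# Coefficient of T in the bound on the number of oracle calls.
calls_coeff = c1 * R / (c * r) + c1 ** 2 / (4 * c ** 2) + 1
assert abs(calls_coeff - (8 * R ** 2 / r ** 2 + 16 * R ** 2 / r ** 2 + 1)) < 1e-6
```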
2) We appreciate your diligence in pointing out the apparent difference between our Theorem 2 and Theorem 7 of Garber & Kretzu (2022). This discrepancy was due to a typographical error. The correct parameters were used in the proof of Theorem 2 (Line 524). We will fix this in the revised version.
3) The discrepancy between our Theorem 3 and Theorem 3 of Garber & Kretzu (2022) is due to numerical rounding. The regret guarantee of Theorem 3 in our paper is
$$
\text{Reg}_T\leq GR\left(\left(\frac{5}{2}\zeta^2+\sqrt{180}\zeta+\frac{4}{\zeta}\right)T^{\frac{3}{4}}+20T^{\frac{1}{2}}\right).
$$
After choosing $\zeta=1$, we find $\frac{5}{2}\zeta^2+\sqrt{180}\zeta+\frac{4}{\zeta}=19.9164\approx 20$, which leads to the same result as Theorem 7 of Garber & Kretzu (2022):
$$
\text{Reg}_T\leq 20GRT^{\frac{1}{2}}+20GRT^{\frac{3}{4}}.
$$
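The rounding claim is easy to verify directly (our check, not part of the rebuttal):

```python
import math

zeta = 1.0
coeff = 2.5 * zeta ** 2 + math.sqrt(180) * zeta + 4 / zeta
# sqrt(180) ~= 13.4164, so the coefficient is ~= 19.9164, rounded up to 20.
assert abs(coeff - 19.9164) < 1e-3
assert round(coeff) == 20
```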
> I struggle to understand Algorithm 5, I think that after the if statement, you should have $< \text{Exp}^{-1}_{x_i}(y),\text{Exp}^{-1}_{x_i}(v_i) > \leq \epsilon$ or $d(x_i,v_i)^2 \leq 3 \epsilon$. Is that true?
Thanks for pointing this out. We apologize for the typographical error: this line should verify whether $\left<\text{Exp}_{x_i}^{-1}y,\text{Exp}_{x_i}^{-1}v_i\right>\leq\epsilon$ or $d(x_i,y)^2\leq 3\epsilon$ holds. If $d(x_i,y)^2 \leq 3 \epsilon$ holds, then we have completed the necessary tasks because $x_i$ is sufficiently close to the target point $y$. And if $\left<\text{Exp}_{x_i}^{-1}y,\text{Exp}_{x_i}^{-1}v_i\right>\leq\epsilon$ holds, by the definition of $v_i$, we have
$$
\left<\text{Exp}_{x_i}^{-1}y,\text{Exp}_{x_i}^{-1}x\right>\leq \left<\text{Exp}_{x_i}^{-1}y,\text{Exp}_{x_i}^{-1}v_i\right>\leq \epsilon
$$
holds for any $x\in\mathcal{K}$. This relationship can be leveraged to construct a separating hyperplane between $y$ and $\mathcal{K}$, as outlined in Lines 542-546.
> I would advise the authors to revisit the title of the paper.
>
Thanks for your thoughtful suggestion. We will be more than happy to remove the word "On" in the title.
> As I see the results, it should pretty straightforward to have similar ones for manifolds of positive curvature (with $\zeta=1$ and $\bar r=r$). Is there any specific reason that you didn't attempt that?
_Repeated from response to Reviewer 4T7s, Comment 2, with brief revisions._
It is not too difficult to generalize the results in our paper to manifolds of positive curvature and even to CAT$(\kappa)$ spaces. To achieve this extension, the following adjustments would be necessary in our manuscript:
1. We may assume that the sectional curvature lies in the range $[\kappa,K]$. In this context, we can replace Lemma 30 in our draft with Corollary 2.1 from Alimisis et al. (2020). When $K>0$, we must further assume that the diameter of the decision set is upper-bounded by $\frac{\pi}{\sqrt{K}}$.
2. Lemma 3 would need to be revised to recompute the Jacobi field, taking into account the sectional curvature of the manifold. Notably, in the case of manifolds with positive curvature, an initial computation provides $\bar{r}=O\left(\frac{\sin(\sqrt{K}(R+r))}{\sqrt{K}(R+r)}\right)\cdot r$, a relation that does not exhibit exponential dependence on $(R+r)$.
3. The separation theorem, specifically for Hadamard manifolds as outlined in Silva Louzeiro et al. (2022), would require generalization to CAT$(\kappa)$ spaces. This change ensures that the separation oracle remains well-defined.
We discovered that delving into all of these details did not offer additional insights and might have actually diminished the readability of the text. Therefore, we chose not to include them in the current version. However, we recognize the importance of this matter and will provide further discussions on this subject in the revised version.
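The claim in item 2, that the positive-curvature factor avoids exponential dependence on $(R+r)$, can be sanity-checked numerically. The sketch below is our own illustration with arbitrary sample values: it contrasts the bounded factor $\sin(x)/x$ that arises under positive curvature with the exponentially growing $\sinh(x)/x$ that appears in Jacobi field comparisons under negative curvature.

```python
import math

# Positive curvature K: distortion factor sin(x)/x with x = sqrt(K)(R + r)
# stays in (0, 1] for x in (0, pi) -- bounded, no exponential blow-up.
def pos_factor(x):
    return math.sin(x) / x

# Negative curvature (Hadamard case): the analogous factor involves
# sinh(x)/x, which grows exponentially in x.
def neg_factor(x):
    return math.sinh(x) / x

for x in [0.1, 0.5, 1.0, 2.0, 3.0]:
    assert 0.0 < pos_factor(x) <= 1.0

# sinh(x) > (e^x - 1)/2, so sinh(x)/x already exceeds an exponential
# lower bound at moderate x.
assert neg_factor(5.0) > (math.exp(5.0) - 1) / (2 * 5.0)
```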
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their clarifications. I am satisfied by them and increase my score by one point.
---
Reply to Comment 1.1.1:
Title: Response
Comment: Thank you so much for the helpful feedback and positive evaluation! | Summary: The paper focuses on constrained Riemannian online optimization on the Hadamard manifold. Existing Riemannian online optimization methods often require projections, which present computational complexity challenges in high-dimensional settings. To address this issue, the authors have developed a projection-free Riemannian online optimization structure, implemented under two scenarios - the separation oracle and the linear optimization oracle.
Initially, they explore the Infeasible R-OGD and Infeasible Riemannian bandit algorithm under a separation oracle, for which the regret bounds are proved to be $\mathcal O(\sqrt T)$ and $\mathcal O(T^{\frac{3}{4}})$ respectively for geodesically convex functions. Furthermore, they consider the Block R-OGD under a linear optimization oracle, and provide a proof for regret bounds of $\mathcal O(T^{\frac{3}{4}})$ and $\mathcal O(T^\frac{2}{3})$ for geodesically convex and geodesically strongly convex functions respectively. It's noteworthy that all these regret bounds match their respective Euclidean counterparts.
Strengths: - Originality & Significance: Although the paper has restricted novelty (discussed in the following), it provides, to the best of my knowledge, the first no-regret guarantee for projection-free Riemannian OCO. Also, it is nice to see that all regret bounds match their Euclidean counterparts.
- Clarity & Significance: The paper is well-organized, technically sound and easy to follow. The assumptions are standard in the literature of Riemannian optimization and Riemannian OCO.
Weaknesses: - Restricted novelty: The majority of the analysis leans heavily on the Euclidean analogue and Jacobian/Hessian comparison. Although it is the standard structure in the Riemannian optimization literature, it would have been enriching to see some novel conceptual ideas that engage more specifically with the geometry of the problem.
- Positive curvature: The study focuses on Hadamard manifolds, known for their non-positive sectional curvature. In contrast, in practical scenarios there is considerable work undertaken within spaces possessing positive sectional curvature, like SO(3). The methodology and algorithms presented in this paper appear, at first glance, to be easily extendable to accommodate CAT$(\kappa)$ spaces. However, the authors have not provided any explanation or context for why such positive curvature spaces were not considered in their study. A consideration of such spaces could have added more depth and wider applicability to the research.
- Possible typo: in the formula for $\bar r$ after l227, there should be an additional $r$ on the RHS.
Algorithm 5: $y$ seems to be a point rather than a tangent vector.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - I'd appreciate if the authors could expound on their primary insights in this paper. Specifically, it would be beneficial to understand the significant challenges encountered while extending the Euclidean projection-free algorithm to the Riemannian manifold.
- Additionally, I'm interested in knowing whether the proposed Riemannian projection-free method could be easily extended to CAT$(\kappa)$ spaces. If this is not feasible, could the authors shed light on the primary hurdles that prevent such an extension?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and helpful feedback. We will revise typographical errors accordingly. We have addressed your specific questions and concerns below.
> I'd appreciate if the authors could expound on their primary insights in this paper. Specifically, it would be beneficial to understand the significant challenges encountered while extending the Euclidean projection-free algorithm to the Riemannian manifold.
_Repeated from response to Reviewer xiK4, Comment 4, with brief revisions._
Thank you for the question. Our work builds heavily on Garber & Kretzu (2022) and Wang et al. (2021, 2023), but we have several novel contributions:
1. We identify that the shrinking set $(1-\delta)\mathcal{K}$ is non-convex and subsequently extend the definition of infeasible projection initially presented by Garber & Kretzu (2022) to accommodate non-convex settings.
2. Quantifying the progress made with each call to the separation oracle presents a highly non-trivial challenge. We overcome this in Lemma 3, where we amalgamate Lemma 19 (equivalent to Lemma 45 in Wang et al. (2023)) with meticulous computation involving Jacobi field comparisons.
3. Concerning the linear optimization oracle, deriving a separating hyperplane using Riemannian Frank-Wolfe stands as the principal hurdle. We manage to overcome this through an innovative application of Riemannian cosine laws. For further insights, we refer readers to Remark 2 and the proof of Lemma 5 in our manuscript.
> Additionally, I'm interested in knowing whether the proposed Riemannian projection-free method could be easily extended to CAT$(\kappa)$ spaces. If this is not feasible, could the authors shed light on the primary hurdles that prevent such an extension?
It is not too difficult to generalize the results in our paper to CAT$(\kappa)$ spaces. To achieve this extension, the following adjustments would be necessary in our manuscript:
1. We may assume that the sectional curvature lies in the range $[\kappa,K]$. In this context, we can replace Lemma 30 in our draft with Corollary 2.1 from Alimisis et al. (2020). When $K>0$, we must further assume that the diameter of the decision set is upper-bounded by $\frac{\pi}{\sqrt{K}}$.
2. Lemma 3 would need to be revised to recompute the Jacobi field, taking into account the sectional curvature of the manifold. Notably, in the case of manifolds with positive curvature, an initial computation provides $\bar{r}=O\left(\frac{\sin(\sqrt{K}(R+r))}{\sqrt{K}(R+r)}\right)\cdot r$, a relation that does not exhibit exponential dependence on $(R+r)$.
3. The separation theorem, specifically for Hadamard manifolds as outlined in Silva Louzeiro et al. (2022), would require generalization to CAT$(\kappa)$ spaces. This change ensures that the separation oracle remains well-defined.
We discovered that delving into all of these details did not offer additional insights and might have actually diminished the readability of the text. Therefore, we chose not to include them in the current version. However, we recognize the importance of this matter and will provide further discussions on this subject in the revised version.
---
Rebuttal Comment 1.1:
Comment: I would like to express my appreciation for the detailed clarification provided by the author, especially in terms of the technical novelty and the extension to the CAT($\kappa$) space. This has significantly enhanced my understanding of the work presented.
In light of this, I will raise the score by one point, moving from 6 to 7.
---
Reply to Comment 1.1.1:
Title: Response
Comment: Thanks for taking the time to provide helpful comments on the paper and for the upward revision of the evaluation score! | Summary: The authors study online learning on homogeneous Hadamard manifolds with geodesically-convex, Lipschitz losses. They focus on projection-free methods and in particular they develop algorithms that use either a separaton oracle or a linear optimization oracle. They study adaptive regret algorithms for the full information and the bandit feedback settings.
Strengths: Besides from the strengths of having the theoretical results authors have, I want to emphasize the following.
This work usually states clearly the assumptions it is making, even though this could be improved, as I suggested. It was good to see the authors taking into account the non-convexity of (1-\delta)\mathcal{K}. Some works ignore this fact; I wonder if they will learn about this in the review of a paper. But the authors here are rigorous and work with the possibly non-convex set.
The results with linear minimization oracle look fine to me
Weaknesses: # Main points of my review
Several results in this paper follow closely Garber and Kretzu (2022). This is not a criticism, of course there are several challenges in the Hadamard case that need to be surmounted.
I have several points of criticism though:
+ The paper contains some mistakes, and I am not sure if one could fix at least one of them while keeping the stated results (see below).
+ Moreover, Algorithm 4 has a regret guarantee in Theorem 2 which is non-standard (the algorithm plays a point and feedback is given for another point, but the regret is still measured with respect to the played point). This seems an artifact of the proof, as in it was the notion that made the proof work, but without any other justification for this made-up definition of regret, usefulness of it, or an application of it, the result is very weak.
+ Lemmas 3 and 4 yield that the number of iterations of Algorithm 2 is exponential in R. This is not desirable. I do not want to criticize this point harshly, since the Hadamard setting is hard and sometimes the geometry makes these exponential dependences appear, and this could be a result deserving its own paper. This dependence also appears in previous Riemannian online learning works. But rather, I just want to encourage the authors, possibly for future work, to try to improve this to a polynomial dependence (see below for a few comments on this).
While I think this work has value, I also think this is not ready for publication. At the very least the errors should be corrected, and the bandit results should not be presented as results (e.g. in the table) unless the authors can justify that the modified notion of regret makes sense. And in any case this should be clearly stated for the readers that only want to check the introduction first.
# Proof Errors
Correct me if I am wrong but I think I found the following errors in your proofs. I can increase my score if the technical errors can indeed be fixed.
+ The proof of Lemma 1 is wrong, although fixable. The points y_{t+1} are not guaranteed to be in B_p(R) and so you cannot use Lemma 29 as you stated it. However, if you look closely at Lemma 5 of Zhang and Sra, you will realize that the lemma actually says that \zeta depends on \kappa and c, which is a stronger inequality than what you stated (dependence on \kappa and D). Using that stronger statement, you can then show in your Lemma 1 that the \zeta in the proof depends on d(\tilde{y}_t, x) \leq D, and so the proof follows.
+ However, your Lemma 2 has the same problem, and when you use it in the proof of Lemma 4, you use it with the point y_i, which, as you point out in L435, satisfies y_i \not \in \mathcal{K}. Therefore, the geometric deformation is not \zeta (which was defined as depending on D), and this does not seem to be so easily fixable.
# Other comments / suggestions
You should specify in Lemma 5 that you always use y\in B_p(R), since you use f(x) = d(x, y)^2/2 and you claim that it is \zeta-smooth, which you can have because when you use Lemma 5 in Alg. 6 it is always y\in B_p(R). But this is not stated and it was confusing for me.
"(pinpointing a compact set that contains the true optimum can improve time complexity) This makes metric projection onto a subset of Riemannian manifold seemingly indispensable, yet this operation is not only computationally taxing but can also lead to unwanted geometric distortion, which can undermine convergence. Certain works have depended on simplifying assumptions to prove results..." I disagree with the message that is given here. For several reasons:
+ Firstly, in many situations, one can "pinpoint" a compact set where the optimum is by just estimating the initial distance to the minimizer and imposing ball constraints (using a doubling trick or other tricks if necessary to avoid assuming knowledge of this distance). In that case, the operation is "projection-free" according to your own definition in line 178, due to the simplicity of the operation, so not necessarily computationally taxing.
+ Secondly, while it is true that using projections can lead to greater geometric distortion that can undermine convergence (e.g. see https://arxiv.org/pdf/2305.16186.pdf where the authors make projected gradient descent work for smooth functions while quantifying the distortion), several works make use of projections to obtain algorithms that previously had to assume the iterates stay in some set, and at the same time the geometric dependence of the convergence rates does not worsen. See for instance https://arxiv.org/pdf/2211.14645.pdf and theorem 7 in https://arxiv.org/pdf/2305.16186.pdf, and https://arxiv.org/pdf/2111.13263v2.pdf section 6
+ Thirdly, the works that mandate the iterates to be in a feasible set by assumption do so because they do not know how to enforce or guarantee that their algorithms stay in some feasible set, even if they have access to projections. It is not as if the assumption is made in order to avoid projections that are "computationally taxing and with extra geometric distortions, which can undermine convergence". Their techniques simply do not allow them to use projections, and, as explained above, more clever algorithms do without extra distortions.
You probably cannot compute \sigma_i in Algorithm 5 in closed form. You probably would need a binary search and then have the algorithm be able to account for an error. Is that right?
# Exponential dependence on the diameter
Regarding the exponential nature of the constants in lemma 3 and running time in lemma 4, this phenomenon is very similar to the exponential constants that appear in the deformations of lemma 2 in https://arxiv.org/pdf/2012.03618.pdf However, in that paper they are interested on optimization, and they show that one can get away with this exponential complexity. The trick is a reduction from global optimization to approximately implementing a ball optimization oracle, so that the algorithms would only need to run in balls where R\sqrt{-\kappa} = O(1). The reduction goes as follows: optimize in one such constant-radius ball with linear rates (regularize as it is done in reductions if necessary, although you'd need smoothness of the losses) and then optimize in another ball with center equal to the previously computed point. In the online learning setting, it is harder but you could in principle try to run Riemannian OGC by minimizing the sort of FTRL objective you have by using this reduction (you'd have to optimize with linear rates in the intersection of constant curvature balls and your sets, but every time the infeasible projection would be easy). Or just implement the separation oracle by looking at making progress in constant-curvature balls. Anyway, maybe it doesn't work for you, but I hope it is useful to you to at least know that a problem of this kind has been sort of solved in Riemannian optimization.
# Minor suggestions / typos
L62 "While a horosphere gsc-convex" -> "While a horosphere is gsc-convex"
L200 "theLipschitz" -> "the Lipschitz"
L259 Alg 5. I would not say target vector y, since y is a point.
L269 Alg 6. "initial vector y" -> "initial vector y_i" (although again, I would not say initial *vector* )
# Edit after rebuttal
The authors provided fixes for the proof errors and a new algorithm to work with two-point bandit feedback. I increased my score from reject to accept (from 3 to 7).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: see above
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: see above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Your insightful comments and constructive criticism are highly valued. We've responded to your specific questions and concerns in the following sections.
>The paper contains some mistakes, and I am not sure if one could fix at least one of them while keeping the stated results.
Thank you for bringing this matter to our attention. We have indeed found that the geometric distortion was underestimated, but we have identified a quick fix based on Lemma 5 from Zhang & Sra (2016):
$$
a^2 \leq \zeta(\kappa,c) b^2+c^2-2bc\cos A.
$$
First, we amend Definition 1 to express $\zeta$ as $\zeta \coloneqq \zeta(\kappa,2R)$.
For the proof of Lemma 1, it suffices to ensure that the inequality $d(\tilde{y}_t,x)\leq 2R$ holds for any $\tilde{y}_t\in B_p(R)$ and $x\in\tilde{\mathcal{K}}$. Because $\tilde{\mathcal{K}}\subseteq \mathcal{K}\subseteq B_p(R)$, by the triangle inequality:
$$
d(\tilde{y}_t,x)\leq d(\tilde{y}_t,p)+d(p,x)\leq 2R.
$$
In the proof of Lemma 4, we have utilized Lemma 2. To validate that $\zeta$ describes the correct geometric distortion, we need to prove that $d(y_i,z)\leq 2R$ holds for any $i\geq 1$ and $z\in(1-\delta)\mathcal{K}$. This can be achieved through induction. The case for $i=1$ is easy. As $y_1\in B_p(R)$, $z\in(1-\delta)\mathcal{K}\subseteq B_p(R)$, the following relationship holds:
$$
d(y_1,z)\leq d(y_1,p)+d(z,p)\leq 2R.
$$
Now assume $d(y_i,z)\leq 2R$ holds for some $i\geq 1$ and any $z\in(1-\delta)\mathcal{K}$; then $\zeta$ is a valid geometric distortion. By Lemma 5 of Zhang \& Sra (2016) and our Lemma 2,
$$
d(y_{i+1},z)^2\leq \zeta d(y_{i+1},y_i)^2+d(y_i,z)^2-2\left<\text{Exp}_{y_i}^{-1}y_{i+1},\text{Exp}_{y_i}^{-1}z\right> \leq d(y_i,z)^2-\frac{\delta^2\bar{r}^2}{\zeta}\leq (2R)^2.
$$
Thus $d(y_{i},z)\leq 2R$ holds for any $i\geq 1$ and $z\in(1-\delta)\mathcal{K}$, which fixes the mistake.
> Moreover algorithm 4 has a regret guarantee in Theorem 2 which is non standard $\dots$
Thank you for your keen observations. We employed this non-standard setting to circumvent a fundamental difficulty on Hadamard manifolds, where $(1-\delta)\mathcal{K}$ is non-convex. During the rebuttal phase, we discovered a method to eliminate this drawback and achieve $O(\sqrt{T})$ regret by using an SO for $\mathcal{K}$ within the two-point feedback setting. The revised algorithm is outlined below:
> Algorithm: Riemannian Projection-free BCO with Two-point Feedback
> Parameters: $\beta, \delta, \delta'$; $\delta\in(0,1)$, $\beta\in(0,1)$, $\delta'=(1-\beta)\frac{\sqrt{-\kappa}(R+r)}{\sinh(\sqrt{-\kappa}(R+r))}\cdot r$.
> Initialize $x_1\in\beta\mathcal{K}$, $y_1=\text{Exp}_p(\frac{\text{Exp}_p^{-1}x_1}{\beta})\quad$ //$y_1\in\mathcal{K}$
>For $t=1,\dots,T$:
>
>> Sample $z_t\sim\mathbb{S}_{x_t}(\delta')$
>>
>> Play $z_t$ and its antipodal point
>>
>> Observe $f_t$ at $z_t$ and the antipodal point
>>
>>Construct $g_t$ by the estimator in Algorithm 3 of Wang et al. (2023)
>>
>>$y_{t+1}'=\text{Exp}\_{y_t}\left( -\eta \frac{S\_{\delta'}}{V_{\delta'}} \Gamma_{x_t}^{y_t}g_t \right)$
>>
>>$y_{t+1}\leftarrow$Output of Algorithm 2 with $\mathcal{K},r,\delta$ and $y_{t+1}'\quad$ //$y_{t+1}\in\mathcal{K}$
>>
>>$x_{t+1}=\text{Exp}_p(\beta \text{Exp}\_p^{-1}y\_{t+1})\quad$ // $x_t\in\beta\mathcal{K}$
>
>End For
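For intuition, here is a Euclidean analogue of the two-point gradient estimate used in the algorithm above. The actual Riemannian estimator of Wang et al. (2023) additionally parallel-transports the estimate and uses the $S_{\delta'}/V_{\delta'}$ scaling; this sketch (with illustrative names and a toy objective) only shows the antipodal-query idea.

```python
import numpy as np

def two_point_grad(f, x, delta, rng):
    # Sample u uniformly from the unit sphere; query f at the two
    # antipodal perturbations and form the scaled-difference estimate.
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)
    d = x.size
    return (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

rng = np.random.default_rng(0)
f = lambda z: z @ z                     # gradient of f at x is 2x
x = np.array([1.0, -2.0, 0.5])
est = np.mean([two_point_grad(f, x, 1e-3, rng) for _ in range(20000)], axis=0)
print(est)                              # approximately [2., -4., 1.]
```

Because both queries share the same random direction, the estimator's norm stays bounded independently of $\delta'$, which is what allows the tighter $O(\sqrt{T})$ rate in the two-point model.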
Proof Sketch: following (15), (18) in our draft, we have
$$
E[\hat{f}\_t(x_t)-\hat{f}\_t(x)]\leq \frac{S_{\delta'}}{V_{\delta'}}E[\left< g_t,-\Gamma_{y_t}^{x_t}\text{Exp}^{-1}_{y_t}x\right>+\left< g_t,\Gamma_{y_t}^{x_t}\text{Exp}^{-1}_{y_t}x-\text{Exp}^{-1}_{x_t}x\right>]+2\delta'\rho G.
$$
For the two-point feedback model, by Lemma 17 of Wang et al. (2023), $\frac{S_{\delta'}}{V_{\delta'}}E[\\|g_t\\|]=O(1)$. Thus,
$$
\frac{S_{\delta'}}{V_{\delta'}}E[\left< g_t,\Gamma_{y_t}^{x_t}\text{Exp}^{-1}\_{y_t}x-\text{Exp}^{-1}_{x_t}x\right>]\leq\frac{S\_{\delta'}}{V\_{\delta'}}E[\\|g_t\\|]\cdot \zeta d(x_t,y_t)=O(1)\cdot \zeta(1-\beta)d(y_t,p)=O(\delta'),
$$
where we use the $\zeta$-smoothness of $\frac{1}{2}d(x,y)^2$ and $1-\beta=O(\delta')$.
Since $y_{t+1}$ is an infeasible projection of $y_{t+1}'$ onto $(1-\delta)\mathcal{K}$,
$$
d(y_{t+1},x)^2\leq d(y_{t+1}',x)^2\leq d(y_t,x)^2+\zeta\eta^2\frac{S_{\delta'}^2}{V_{\delta'}^2}\\|g_t\\|^2-2\eta\frac{S_{\delta'}}{V_{\delta'}}\left< g_t,-\Gamma_{y_t}^{x_t}\text{Exp}^{-1}_{y_t}x\right>
$$
holds for any $x\in(1-\delta)\mathcal{K}$. Combining the above inequalities, we have
$$
E[\hat{f}_t(x_t)-\hat{f}\_t(x)]\leq\frac{d(y_t,x)^2-d(y\_{t+1},x)^2}{2\eta}+O(\eta)+O(\delta')
$$
holds for any $x\in(1-\delta)\mathcal{K}$. Also, following (20), (21) and taking $x=\text{Exp}_p((1-\delta)\text{Exp}^{-1}_p x^*)$, we have $E[f_t(z_t)-\hat{f}_t(x_t)]=O(\delta')$ and $E[\hat{f}_t(x)-f_t(x^*)]=O(\delta+\delta')$.
In sum,
$$
\sum_{t=1}^TE[f_t(z_t)-f_t(x^*)]=O\left(\frac{1}{\eta}+(\eta+\delta+\delta')T\right).
$$
Choosing $\eta,\delta,\delta'=O(\frac{1}{\sqrt{T}})$, we get $O(\sqrt{T})$ regret.
Your comments or suggestions on this idea would be greatly appreciated. We are more than willing to replace the current BCO result with this revised version, as it resolves the problem related to the non-standard setting.
> Lemmas 3 and 4 yield that the number of iterations of algorithm 2 is exponential on R $\dots$
Thanks for your detailed suggestions. We will consider this interesting problem in the near future.
> You should specify in Lemma 5 that you always use $y\in B_p(R)$. You probably cannot compute $\sigma_i$ in Algorithm 5 in closed form.
Thank you for your insightful suggestion. You are indeed correct, and we appreciate your pointing this out. We will provide more detailed discussions in our revised version.
> "(pinpointing a compact set that contains the true optimum can improve time complexity) $\dots$ without extra distortions.
Thank you for sharing your insightful understanding of Riemannian metric projection. We will make sure to carefully revise these sentences in the updated version of our work.
---
Rebuttal Comment 1.1:
Title: reply
Comment: Thanks for the reply. Indeed that fixes the proof.
I would not call your new bandit algorithm "a method to eliminate this drawback", since the new result is different. It works in a different model (and as a consequence, you can get a $\sqrt{T}$ bound instead of $T^{3/4}$). In any case, that result is indeed interesting, and the paper has several other contributions that are interesting. Please add the new result in full detail to the paper and keep the other one. You can emphasize that the other one does not work with a standard notion of regret.
I am increasing my score from reject to accept (3 to 7).
---
Reply to Comment 1.1.1:
Title: Response
Comment: Thank you for your valuable feedback on the paper and for increasing the evaluation score! We will carefully revise the paper based on your comments. | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The authors propose a generalization of online learning on Riemannian manifolds via separation and linear optimization oracles. The core idea here is to use the oracles to construct an infeasible projection, which may not be the nearest point in the constraint/decision set $\mathcal{K}$, but is in $\mathcal{K}$ and is closer to a shrunk feasible set $(1-\delta)\mathcal{K}$ than the original point. The interesting finding here is that because there is a buffer between $\mathcal{K}$ and $(1-\delta)\mathcal{K}$, we have a constant distance decrease of $O(\delta^2)$ (Lemma 4, eq. (5) in the appendix) along the separating-oracle direction toward the shrunk set $(1-\delta)\mathcal{K}$, and hence the projection onto $\mathcal{K}$ can be done in time depending only on $\delta$. This means we can construct an infeasible/inaccurate projection via the separation oracle in $O(1/\delta^2)$ time, and the algorithm follows from replacing the projection step with the infeasible projection and bounding the errors. The authors also consider linear optimization oracles, which is done by constructing separation oracles from linear optimization oracles via Frank-Wolfe. Actually, this draft is a combination of Garber & Kretzu (2022) and Wang et al. (2021), where the authors generalize the online-learning-via-SO/LOO framework of Garber & Kretzu (2022) with the tools and assumptions of Wang et al. (2021).
Strengths: The idea of constructing an infeasible projection (whose output is feasible for $\mathcal{K}$) with guarantees relative to a shrunk set $(1-\delta)\mathcal{K}$ is very interesting, and it is good to see that it works on Riemannian manifolds. The introduction is pretty well-written, there is no major error in the proof, and generalizing the result from Euclidean space to Riemannian spaces may be challenging in some corner cases.
Weaknesses: The major weakness of the paper is its complexity, the lack of experiments, and the limiting assumptions.
First, the proof in the paper is pretty complex and is not self-contained. To verify any proof in the paper, the reader usually needs to go to Garber & Kretzu (2022) and Wang et al. (2021), which use another set of symbols and may again point to other papers, and this may lead to confusion for both the authors and the readers. For example, Lemma 19 in the draft points to Lemma 45 of Wang et al. (2023), which requires the curvature to also be upper-bounded, but this assumption and its coefficient disappear in the draft. Garber & Kretzu (2022) assume that the feasible set contains the origin. The paper generalizes it to an arbitrary point $p$, but in that case the shrunk set should be redefined toward $p$ so that $(1-\delta)\mathcal{K}\subset\mathcal{K}$, and this is not done either. The proof in the paper is mostly correct where self-contained, but it is very time-consuming to verify the mentioned errors from cited lemmas by chasing the links, so I am not very sure about the correctness of this theoretical paper.
Second, I am not very sure about the practical aspect of the paper due to its limiting assumptions and lack of experiments. Surely, a theoretical paper need not include experiments, but this paper is too complex to verify its correctness, so it would be good to show the reader that it works in some sense. I am concerned about the limiting assumptions because most of the examples mentioned in the draft's introduction do not fit them, e.g., the spherical constraint in K-means clustering. The manifold must be bounded, contain a non-empty unit ball, and be contained within a ball of radius $\leq \pi/\sqrt{4\kappa_2}$ (Lemma 19). And I am unsure when the separation oracle and linear optimization oracle are cheaper than the projection in a practical example. Please give the readers more examples to understand when the setting would be suitable.
Finally, the paper is a direct combination of Garber & Kretzu (2022) and Wang et al. (2021). Most of the work here looks like a Riemannian generalization of Garber & Kretzu's (2022) framework using Wang et al.'s (2021, 2023) tools, replacing distances with exponential maps and bounding them with Wang et al.'s (2021, 2023) inequalities. There are not many new surprising results, but double the complexity. It is not necessary to be novel to publish, but given that I am not convinced the results are entirely correct, I would recommend a weak rejection. This draft may be more suited to a journal than a conference.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. L32: Computing the orthogonal projection of a "polytope" onto half-spaces is NP-hard, but point projection is not. So this is unrelated to the draft.
2. L161: Could you explain how the existence of $g$ in Lemma 18 implies an efficient implementation?
3. The Assumptions & Lemma 19 require $R\leq \pi/\sqrt{4\kappa_2}$, but this does not appear in the assumptions and coefficients.
4. Lemma 29 is different from the original paper. Please note that you use the monotonicity of $x\coth(x)$ here.
5. L179: Please cite the projection algorithm in $O(\log(1/\epsilon))$.
6. Algorithm 2 takes $\delta$ as an input, but $\delta$ does not appear in the algorithm. Please at least define $\gamma$ as a function.
7. Appendix L430 eq (3): Please annotate that the later line suffices for the inequality.
8. Algorithm 2: It also uses one membership oracle at each step.
9. Appendix L456: You use the bounded gradient property, not the Lipschitzness of the gradient.
10. Algorithm 4: This looks pretty much like R-BAN. Why does R-BAN sample from a sphere while this algorithm samples from a ball?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments and suggestions. We appreciate the time and effort spent in reviewing our work, and we will revise the draft accordingly. Below, we address your specific questions and concerns:
> For example, the Lemma 19 in the draft points to Wang et al. 2023 Lemma 45, which requires the curvature to be also upper-bounded, but such assumption and coefficient disappears in the draft. Garber & Kretzu (2022) assume that the feasible set contains the origin. The paper generalized it to arbitrary point $p$, but in that case, the shrinking set $\mathcal{K}$ should be redefined toward $p$ for $(1-\delta)\mathcal{K}\subset \mathcal{K}$, but it is also not done.
This might be a misunderstanding. As you pointed out, Lemma 19 requires the sectional curvature to be both lower and upper bounded. We assumed the manifold to be Hadamard and defined the curvature lower bound in Assumption 1, and the sectional curvature of Hadamard manifolds is upper bounded by $0$ (Lines 149-151), so we did not overlook this point. Additionally, we defined the shrinking set $(1-\delta)\mathcal{K}$ for some fixed $p$ in Definition 2.
> The proof of the paper is pretty complex and is not self-contained.
We acknowledge that our proof is somewhat complex due to the Jacobi field technique. To make the paper more accessible, we have introduced related background in Appendix E.1. We plan to further improve readability in the revised version and welcome any comments or suggestions.
> I am not very sure about the practical aspect of the paper due to its limiting assumptions and lack of experiments. $\dots$ Please give the readers more examples to understand when the setting would be suitable.
We mainly consider Hadamard manifolds, a standard assumption (Zhang and Sra, 2016; Wang et al., 2021). While more restrictive than general Riemannian manifolds, practical examples exist, such as the geometric mean and the Bures-Wasserstein barycenter on the manifold of SPD matrices (Line 165). In these two specific examples, Weber & Sra (2022b) demonstrate that, WLOG, the feasible set can be assumed to be $\mathcal{K}=\\{X|L\preceq X\preceq U\\}$, where $L$ and $U$ are SPD matrices. Weber & Sra (2022b) establish that the linear optimization oracle for $\mathcal{K}$ admits a closed-form solution. We show that a separation oracle for $\mathcal{K}$ can also be efficiently implemented. Under the affine-invariant metric, we have $\left<A,B\right>_Y=\text{tr}(Y^{-1}AY^{-1}B)$ and $\text{Exp}_Y^{-1}X=Y^{\frac{1}{2}}\log(Y^{-\frac{1}{2}}XY^{-\frac{1}{2}})Y^{\frac{1}{2}}$. By some computation, for any $X\in\mathcal{K}$, we have:
$$\left< -\text{Exp}_Y^{-1}X,Y \right>_Y>0\text{ when }Y\succ U$$
and
$$\left< -\text{Exp}_Y^{-1}X,-Y \right>_Y>0\text{ when }Y\prec L$$
Thus both optimization oracles can be efficiently computed. We agree that numerical experiments would enhance our work and will include some experiments on the SPD manifold in the revised version.
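The first of the two claimed inequalities can be sanity-checked numerically from the metric formulas quoted above. The helper functions and the instantiation ($U=I$, $X=\tfrac{1}{2}I$, and a random $Y\succ U$) are ours, purely for illustration.

```python
import numpy as np

def spd_log(M):
    # Matrix logarithm of a symmetric positive-definite matrix via eigendecomposition.
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T

def spd_sqrt(M):
    # Matrix square root of a symmetric positive-definite matrix.
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

def exp_inv(Y, X):
    # Exp_Y^{-1} X = Y^{1/2} log(Y^{-1/2} X Y^{-1/2}) Y^{1/2}
    Yh = spd_sqrt(Y)
    Yih = np.linalg.inv(Yh)
    return Yh @ spd_log(Yih @ X @ Yih) @ Yh

def inner(Y, A, B):
    # <A, B>_Y = tr(Y^{-1} A Y^{-1} B)
    Yi = np.linalg.inv(Y)
    return np.trace(Yi @ A @ Yi @ B)

rng = np.random.default_rng(0)
n = 4
U = np.eye(n)                           # upper bound of K = {X : L <= X <= U}
X = 0.5 * np.eye(n)                     # a feasible point, X <= U
B = rng.standard_normal((n, n))
Y = U + B @ B.T + 0.1 * np.eye(n)       # an infeasible point with Y > U

val = inner(Y, -exp_inv(Y, X), Y)
print(val > 0)                          # True: <-Exp_Y^{-1} X, Y>_Y > 0 when Y > U
```

Here the inner product simplifies to $-\mathrm{tr}\log(Y^{-1/2}XY^{-1/2})$, which is positive since $Y\succ U\succeq X$ makes the logarithm's argument contractive.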
> The paper is a direct combination between Garber \& Kretzu (2022) and Wang et al. (2021).
Thank you for your comment. We acknowledge the similarities between our work and the works of Garber & Kretzu (2022) and Wang et al. (2021, 2023), but it also features several significant and novel contributions:
1. We identify that the shrinking set $(1-\delta)\mathcal{K}$ is non-convex and subsequently extend the definition of infeasible projection initially presented by Garber & Kretzu (2022) to accommodate non-convex settings.
2. Quantifying the progress made with each call to the separation oracle presents a highly non-trivial challenge. We overcome this in Lemma 3, where we amalgamate Lemma 19 (equivalent to Lemma 45 in Wang et al. (2023)) with meticulous computation involving Jacobi field comparisons.
3. Concerning the linear optimization oracle, deriving a separating hyperplane using Riemannian Frank-Wolfe stands as the principal hurdle. We manage to overcome this through an innovative application of Riemannian cosine laws. For further insights, we refer readers to Remark 2 and the proof of Lemma 5 in our manuscript.
> L161: Could you explain how the existence of $g$ in Lemma 18 implies an efficient implementation?
We consider a gsc-convex set $\mathcal{K}=\\{x|\max_{1\leq i\leq m}{h_i(x)}\leq 0\\}$ where each $h_i(x)$ is gsc-convex. Given a point $y\notin\mathcal{K}$, we can check the sign of $h_i(y)$ for $i=1,\dots,m$ until we find $i^*$ such that $h_{i^*}(y)>0$, then by Lemma 18, we have
$$
-\left<\operatorname{Exp}_{y}^{-1}x,\nabla h_{i^*}(y)\right> >0
$$
holds for any $x\in\mathcal{K}$ and thus we get a separation oracle between $y$ and $\mathcal{K}$. Computing this separation oracle requires at most $m$ function value evaluations and $1$ gradient evaluation, which can be easily implemented when $m$ is moderate.
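In a Euclidean toy, the same scan-the-constraints construction reads as follows (ordinary gradients stand in for Riemannian gradients; the constraint functions are illustrative, not from the paper).

```python
import numpy as np

def separation_oracle(y, hs, grads):
    # K = {x : max_i h_i(x) <= 0}. Scan the constraints; the gradient of
    # the first violated one gives a direction separating y from K.
    for h, grad in zip(hs, grads):
        if h(y) > 0:
            return grad(y)
    return None                          # y is feasible; nothing to separate

# Illustrative K: unit ball intersected with a half-space (m = 2 constraints)
hs    = [lambda x: x @ x - 1.0,  lambda x: x[0] - 0.5]
grads = [lambda x: 2.0 * x,      lambda x: np.array([1.0, 0.0])]

y = np.array([2.0, 0.0])                # infeasible: outside the unit ball
g = separation_oracle(y, hs, grads)
x_feas = np.array([0.0, 0.0])
print((x_feas - y) @ g < 0)             # True: the hyperplane separates y from K
```

As stated above, the cost is at most $m$ function evaluations and one gradient evaluation per oracle call.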
> Assumptions \& Lemma 19 requires $R\leq \pi/\sqrt{4\kappa_2}$, but it's not in the assumptions and coefficients.
In Assumption 1, we assumed $\mathcal{M}$ is Hadamard, so $\kappa_2=0$. In this case, we do not need to assume the manifold is bounded because for any two different points, there exists a global length-minimizing geodesic connecting them.
> L179: Please cite the projection algorithm in $O(\log(1/\epsilon))$.
Let $B_p(r)$ be a geodesic ball with center $p$ and radius $r$, and let $x$ be a point outside of $B_p(r)$; we intend to compute the projection of $x$ onto $B_p(r)$. We first compute the geodesic connecting $x$ and $p$, say $\gamma(t)$, which intersects the geodesic sphere $S_p(r)$ at $z$. Then, by Gauss's Lemma (Theorem 6.8, Lee 2006), the tangent vector of the geodesic $\gamma(t)$ at $z$ is perpendicular to the geodesic sphere $S_p(r)$, which means $z$ is the projection of $x$ onto $B_p(r)$. So it suffices to perform an $O(\log(1/\epsilon))$ binary search on the geodesic $\gamma(t)$ to obtain an approximation of $z$ up to $\epsilon$ precision. At the $t$-th step of the binary search, we merely need to verify whether $d(x_t,p)\geq r$ holds, which is computationally efficient.
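The binary-search scheme can be sketched generically; the procedure only needs geodesic evaluation and distance checks, and a Euclidean geodesic stands in here purely for illustration.

```python
import numpy as np

def project_to_ball(p, x, r, geodesic, dist, eps=1e-9):
    # Binary search for t with d(geodesic(p, x, t), p) = r, where
    # geodesic(p, x, 0) = p and geodesic(p, x, 1) = x. Each step uses one
    # distance check, so eps precision costs O(log(1/eps)) checks.
    lo, hi = 0.0, 1.0
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if dist(geodesic(p, x, mid), p) >= r:
            hi = mid
        else:
            lo = mid
    return geodesic(p, x, lo)

# Euclidean stand-ins for the geodesic and the distance.
geodesic = lambda p, x, t: (1.0 - t) * p + t * x
dist = lambda a, b: np.linalg.norm(a - b)

p = np.zeros(2)
x = np.array([3.0, 4.0])                # distance 5 from p
z = project_to_ball(p, x, 1.0, geodesic, dist)
print(np.round(dist(z, p), 6))          # 1.0
```

On a Hadamard manifold one would substitute the exponential-map parametrization of $\gamma(t)$ and the Riemannian distance for the two lambdas.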
---
Rebuttal Comment 1.1:
Comment: > Definition of shrinking set
Sorry for having skipped the definition and the non-positive curvature of Hadamard spaces. For clarity, I think it would be better to put the latter near the assumptions.
After reading the authors' rebuttal, I have decided to bump my score. However, I am still not confident about the correctness of the draft because it is not self-contained, and I think the draft is more suitable for a journal than a conference.
---
Reply to Comment 1.1.1:
Title: Response
Comment: Thank you for your constructive feedback and for adjusting the evaluation score upwards! We will clarify the non-positive curvature characteristic of Hadamard manifolds near the assumptions.
A Neural Collapse Perspective on Feature Evolution in Graph Neural Networks | Accept (poster) | Summary: The paper investigates the relationship between graph topology and feature evolution in Graph Neural Networks (GNNs). The paper starts by discussing the phenomenon of Neural Collapse (NC) in instance-wise deep classifiers, where within-class variability decreases and class means align to specific symmetric structures. The study is then extended to node-wise classification using Stochastic Block Model (SBM) graphs. An empirical study reveals a decrease in within-class variability in GNNs trained for node classification on SBMs, although not as pronounced as in the instance-wise setting. The authors propose a graph-based mathematical model to understand the influence of node neighborhood patterns and community labels on NC dynamics. The model requires strict structural conditions on the graphs to exhibit exact variability collapse, highlighting the distinction between GNNs and plain deep neural networks. Gradient dynamics analysis of the graph-based model provides theoretical explanations for the observed partial collapse in GNNs. The paper also explores the evolution of features in well-trained GNNs and compares it to spectral clustering methods. Overall, the paper investigates feature evolution in GNNs and provides insights into the impact of graph topology on this process.
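A symmetric SBM of the kind described in the summary can be sampled in a few lines (a minimal sketch; the community count and edge probabilities are illustrative, not the paper's exact settings).

```python
import numpy as np

def ssbm_adjacency(n, k, p, q, rng):
    # Symmetric SBM: n nodes split evenly into k communities; each edge
    # appears with prob. p inside a community and prob. q across communities.
    labels = np.repeat(np.arange(k), n // k)
    same = labels[:, None] == labels[None, :]
    probs = np.where(same, p, q)
    upper = np.triu(rng.random((n, n)) < probs, k=1)  # no self-loops
    A = (upper | upper.T).astype(int)                 # undirected graph
    return A, labels

rng = np.random.default_rng(0)
A, y = ssbm_adjacency(200, 2, 0.5, 0.05, rng)
print(A.shape, A.trace())               # (200, 200) 0
```

The community labels double as the node-classification targets in this setting.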
Strengths: 1. This paper focuses on an interesting topic: Neural Collapse during the training of graph neural networks.
2. The paper concretely shows the decrease of within-class variability.
Weaknesses: 1. Analyzing only intra-class (within-class) variability without discussing inter-class variability is meaningless.
2. The study of over-smoothing [1] phenomena has also discussed the decrease of within-class variability. A discussion about the difference between within-class variability caused by over-smoothing and neural collapse can clarify the contribution of this work.
3. The theoretical analysis presented in this work has limited practical utility.
[1] Keriven, Nicolas. "Not too little, not too much: a theoretical analysis of graph (over) smoothing." *arXiv preprint arXiv:2205.12156* (2022).
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Can the theoretical analysis in this work lead to any practical design improvements?
2. The term "instance-wise case" (line 11) should be defined.
3. The notation of class is used in line 155 but is not defined in the 2.1 data model section.
4. A more detailed discussion should be given about the regime of exact recovery (line 94).
5. The optimizer is defined in line 147, but the loss or objective function to be optimized is not specified.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **General comment:** We would like to thank **Reviewer GzMi** for the helpful feedback.
**Q: Analyzing only intra-class (within-class) variability without discussing inter-class variability is meaningless.**
**A:** We have included a thorough analysis of the increase in between-class variability of the penultimate layer features in Theorem 3.3. These results are also corroborated by experiments.
**Q: The study of over-smoothing [1] phenomena has also discussed the decrease of within-class variability. A discussion about the difference between within-class variability caused by over-smoothing and neural collapse can clarify the contribution of this work.
[1] Keriven, Nicolas. "Not too little, not too much: a theoretical analysis of graph (over) smoothing."**
**A:** Over-smoothing has been widely studied in the literature to model the reduction of within-class **and** between-class feature variability across layers of a GNN during training. On the contrary, neural collapse (NC) studies the reduction in within-class but an "increase" in between-class feature variability (especially of the penultimate layer during training). Interestingly, note that when the penultimate layer features exhibit neural collapse, we can address the over-smoothing problem as neural collapse also represents the maximal separation between feature class means in addition to zero within-class feature variability.
The NC analysis presented in this work covers two aspects: training and inference. In section 3, where the training phase is considered, we analyze the evolution of the penultimate layer features during training (this is a standard practice that various papers analyzing NC have followed). The significance of this approach is that we can now analyze the role of the objective function and the inherent graph structure in determining the "desirable" properties of these features to attain the global minima (Theorem 3.1). The characterization of the strict structural properties for which NC minimizers exist is a unique contribution of this work. Additionally, our gradient flow analysis for these penultimate layer features not only models the decrease in within-class variability but an increase in between-class variability as well. We present the conditions for the amount of regularization needed for this behavior in Theorem 3.3. Thus, presenting a unique perspective on feature evolution compared to previous over-smoothing analyses.
Additionally, in section 4, we present a layer-wise feature evolution in a well-trained GNN. This analysis considers well-trained GNNs in inductive settings for layer-wise analysis and inherently differs from the over-smoothing analysis of [1] which considers networks in the training phase of semi-supervised settings. We will make sure to include this discussion in the main text. Thank you again for highlighting this point.
**Q: The theoretical analysis presented in this work has limited practical utility.**
**A:** We believe that our work can have the following practical significance:
1. The results in Theorem 3.1 on the strict structural condition of the graph, together with the gradient flow analysis in Theorem 3.3, indicate that, contrary to the case of DNNs where MSE loss leads to collapsed minimizers, the feature evolution in GNNs during training leads only to partial collapse (even in the most simplistic settings). This result sheds light on the impact of structural conditions on the ideal feature configurations of an expressive GNN.
2. Additionally, observe that attaining neural collapse solves the over-smoothing problem as neural collapse represents the maximal separation between feature class means in addition to zero within-class feature variability. This is indeed an important takeaway to the GNN community, which we will clarify in the revision.
3. A recent work by Ma et al. [1] showed that homophily might not be necessary for good GNN performance on node classification tasks. By leveraging Theorem 3.1 of this work, we can observe that the structural condition (dubbed condition C) required for collapsed minimizers covers both homophilic and heterophilic graphs. Thus, our results provide insight into these surprising empirical results [1].
4. Our work can potentially shed light on an ideal "graph-rewiring" strategy for improved GNN performance as we know the nature of the minimizers when condition C is satisfied by the computational graph.
5. We believe that extensions of our analysis to semi-supervised /self-supervised settings would be of great value to the community. For instance, the structural conditions that we analyze in this work can be used for graph augmentation in graph contrastive learning.
[1] Yao Ma, Xiaorui Liu, Neil Shah, and Jiliang Tang. Is homophily a necessity for graph neural networks? In International Conference on Learning Representations (ICLR), 2022
**Q: The term "instance-wise case" (line 11) should be defined.**
**A:** Footnote 1 clarifies this definition. We will incorporate it into the main text for clarity. Throughout the paper, by (plain) DNNs we mean networks that output an instance-wise prediction (e.g.,image class rather than pixel class), while by GNNs we mean networks that output node-wise predictions.
**Q: The notation of class is used in line 155 but is not defined in the 2.1 data model section.**
**A:** It is defined in the setup section (line 74).
**Q: A more detailed discussion should be given about the regime of exact recovery (line 94).**
**A:** Thanks for the suggestion. We will add a discussion in the revised version.
**Q: The optimizer is defined in line 147, but the loss or objective function to be optimized is not specified.**
**A:** We consider the MSE loss in our analysis. It is specified in line 78, above equation 2.
---
Rebuttal Comment 1.1:
Title: Reply to Author Rebuttal
Comment: Thank you for the detailed explanations provided.
Regarding my initial query, could you point me to the specific experiments that support your assertion of an "increase in between-class variability"? In line 247, you've showcased empirical results that indicate a gradual decrease in the NC1 metrics as the network depth increases. Would it be possible for you to measure the NC2 under the same experimental conditions?
---
Reply to Comment 1.1.1:
Comment: **We thank the reviewer for the response.**
**Q: Regarding my initial query, could you point me to the specific experiments that support your assertion of an "increase in between-class variability"?**
**A:** Our claim on the "increase in between-class variability" along the optimization is supported by the gUFM experiments (setup detailed in lines 201-209), whose results are illustrated in Figures 3 and 4. Our gradient flow results in Theorem 3.3 (and its proof) deal with this gUFM setting and states in item (2) the increase in between-class variability, under suitable assumptions (required for rigorous theoretical derivation). Importantly: (a) Our analysis shows a reduction in $\widetilde{NC}_1$ metric that scales the within-class variability by the between-class variability (Eq 9 in main paper); (b) The empirical and theoretical analysis of the gUFM behavior serves as a good approximation of the GNNs behavior in Figures 1 and 2 (Although in Fig 2, $Tr(\Sigma_B)$ slightly decreases, but not at the rate of $Tr(\Sigma_W)$, thus leading to a reduction in $\widetilde{NC}_1$).
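The variability metrics discussed above can be computed directly from penultimate-layer features. Below is a minimal sketch of one common NC1 variant, the ratio of within- to between-class covariance traces; it illustrates the idea and is not necessarily the paper's exact Eq. 9.

```python
import numpy as np

def nc1_ratio(features, labels):
    # Tr(Sigma_W) / Tr(Sigma_B): within-class variability scaled by
    # between-class variability; smaller means closer to collapse.
    mu_g = features.mean(axis=0)
    n = len(features)
    sw = sb = 0.0
    for c in np.unique(labels):
        Xc = features[labels == c]
        mu_c = Xc.mean(axis=0)
        sw += ((Xc - mu_c) ** 2).sum() / n
        sb += (len(Xc) / n) * ((mu_c - mu_g) ** 2).sum()
    return sw / sb

# Toy check: perfectly collapsed features give exactly 0.
f = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
y = np.array([0, 0, 1, 1])
print(nc1_ratio(f, y))                  # 0.0
```

With this normalization, the ratio can fall either because $\mathrm{Tr}(\Sigma_W)$ shrinks or because $\mathrm{Tr}(\Sigma_B)$ grows, matching the two behaviors contrasted in the answer above.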
**Q: In line 247, you've showcased empirical results that indicate a gradual decrease in the NC1 metrics as the network depth increases. Would it be possible for you to measure the NC2 under the same experimental conditions?**
**A:** Following the reviewer's suggestion, we are currently conducting the proposed experiments to obtain versions of Figures 5 and 6 that examine the NC2 metric (the current Figures 5 and 6 examine NC1). As a takeaway, we did not observe any significant alignment of the weights or class means with ETF/OF structures across depth in this setting (i.e., during inference). Although these experiments do not affect the main message of the paper, we will add them to the final version to promote future research along this direction. Thank you for the suggestion. | Summary: This paper investigates the feature evolution in Graph Neural Networks (GNNs) via the lens of Neural Collapse. They conduct an empirical study that reveals a decrease in within-class variability in the deepest features of GNNs, but not to the extent observed in instance-wise classification settings. By proposing and analyzing a graph-based Unconstrained Features Model (UFM), the authors show that a strict structural condition on the graphs is necessary for exact variability collapse, which is a rare event. They provide theoretical reasoning for the partial collapse observed during GNN training and study the evolution of features across layers in well-trained GNNs, comparing the decrease in NC metrics with power iterations in spectral clustering methods.
Strengths: This paper presents a pioneering study on Neural Collapse (NC) in Graph Convolutional Networks (GCNs) and the forward process of a neural network, which has the potential to inspire future work. The authors provide unique and valuable theoretical results, such as the partial collapse in Theorem 3.1. Additionally, the paper is well-written, the methodology is clearly explained, and the proposed approaches appear to be solid.
Weaknesses: This paper has a few areas that could be improved upon.
1. Some experimental findings could benefit from stronger support through theoretical results (Please refer to Questions 1-3).
2. The applications and implications of certain theorems could be further elaborated (Please refer to Questions 4-5).
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: To enhance the manuscript, the authors may consider addressing the following points:
1. In addition to the collapse of variability (NC1), the preference towards a simplex ETF (NC2) is an important observation in the original NC paper [34]. From Figure 1(c), it seems that some variant of NC2 might also hold in GCNs. To provide a more comprehensive understanding of NC, it would be helpful if the authors could include a theorem or at least a discussion on NC2.
2. While Theorem 3.1 and Theorem 3.2 demonstrate that the within-class variability is non-zero with high probability, the authors have not provided a lower bound for the variability concerning p and q. Analyzing this lower bound could strengthen the paper's contribution. For example, the authors might consider analyzing the expected variability under SSBM.
3. Theorem 3.3 appears to be somewhat weaker than existing studies on NC (e.g., [54]). It would be beneficial to provide a local or global convergence result.
4. In Lines 270 - 276, the authors show that the ratios in GCN behave differently compared to those in power iterations (i.e., simplified graph convolutional networks). It would be insightful if the authors could explain how this difference benefits GCNs. Additionally, the evolutions in $\mathcal{F}$ and $\mathcal{F}'$ are different, and understanding how this difference impacts the performance of $\mathcal{F}$ and $\mathcal{F}'$ would be valuable.
5. Relating Theorem 4.1 to classic results in GNNs (e.g., over-smoothing) could provide further context and strengthen the paper's overall contribution to the field.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **General comment:** We are grateful to **Reviewer pASb** for an encouraging review and for raising interesting questions.
**Q:** Regarding theoretical results and a discussion on NC2.
**A:** We agree with the reviewer's point that NC2 might also hold to some extent for GNNs. We show these metrics in Appendix F. In those experiments, the NC2 metrics indicate that the (centered) class-mean features do not align significantly with a simplex ETF. This is due to $\hat{A}$ in the risk formulation, our understanding of which is not yet theoretically rigorous. We attempted to analyze the risk via a matrix factorization approach, as in previous efforts [1]. The SVD approach on $W_2H\hat{A}$ along the "central path" [2, 1] is complicated by the presence of $V^T_HU_{\hat{A}}$, where $V_H$ represents the right-singular basis of $H$ and $U_{\hat{A}}$ represents the left-singular basis of $\hat{A}$. These bases are not guaranteed to align (at the very least, the assumption that $V^T_HU_{\hat{A}}$ is diagonal seemed too strong). Without such alignment, it is extremely difficult to obtain the properties of the singular values (s.v.) of $H$ w.r.t. those of $\hat{A}$. Additionally, in the non-GNN NC literature, there are efforts that fix the last-layer weights to a simplex ETF and observe only a minimal drop in DNN performance; sometimes, this also helps in addressing class-imbalance issues.
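To illustrate what an NC2-style measurement could look like in practice, here is a minimal, hypothetical sketch (not the paper's exact metric) that compares the Gram matrix of the centered class means against the Gram matrix of a simplex ETF, which is proportional to $I_C - \frac{1}{C}\mathbf{1}\mathbf{1}^\top$:

```python
import numpy as np

def nc2_etf_distance(M):
    """Distance of class means M (C x d) from a simplex-ETF configuration.

    Returns || G/||G||_F - G_ETF/||G_ETF||_F ||_F, where G is the Gram matrix
    of the centered class means; a value near 0 indicates a simplex ETF.
    """
    C = M.shape[0]
    Mc = M - M.mean(axis=0)                    # center the class means
    G = Mc @ Mc.T
    G_etf = np.eye(C) - np.ones((C, C)) / C    # simplex-ETF Gram (up to scale)
    return np.linalg.norm(G / np.linalg.norm(G) - G_etf / np.linalg.norm(G_etf))

# the rows of I - (1/C) 11^T form an exact simplex ETF, so the distance is ~0
C = 4
M = np.eye(C) - np.ones((C, C)) / C
print(nc2_etf_distance(M))  # ~0.0
```

By contrast, collinear or otherwise degenerate class means yield a clearly positive distance, which is the kind of behavior described above for the GNN features.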
**Q: Regarding the lower bound for the variability w.r.t p and q.**
**A:** In Theorem 3.1 we obtained the conditions on the graph structure for which NC solutions are the minimizers of the risk. Also, since the expected adjacency matrix $\mathbb{E}\hat{A}$ satisfies this condition, neural collapse is desirable. However, to lower-bound the variability (which pertains to the partial collapse result that we see in the experiments), we believe our gradient flow analysis in Theorem 3.3 can serve as a starting point. Through this analysis, we obtain the rate of change of trace of within and between class variability along the flow (Appendix D.4). These rates depend on the perturbation from the expected SSBM structure. Thus, the optimal within-class/between-class variability depends on the perturbation (which in turn depends on $p,q$). The challenge with obtaining an exact lower-bound lies in calculating the derivatives of terms such as $HEE^\top H^\top$ (notation as per proof of Theorem 3.3), which turned out to be more complicated than expected. An alternative approach can involve a spectral analysis of $E$ in terms of $p,q$. However, we believe it requires a separate study on its own.
**Q: Theorem 3.3 appears to be somewhat weaker compared to existing studies on NC (e.g., [54]). It would be beneficial to provide a local or global convergence.**
**A:** The work of [54] (ref in paper) analyzed the loss landscape and convergence when the risk is based on cross-entropy loss. However, they do not analyze the gradient dynamics. Other NC works that analyze gradient dynamics [15,40] (ref in paper) consider only a plain (non-graph) UFM, which is much simpler to analyze (and the gradient flow is ensured to reach an NC minimizer, which is not the case with gUFM). In Theorem 3.3 we analyze NC1 along the gradient dynamics of a gUFM which is much more involved as the flow is not guaranteed to reach an NC minimizer.
**Q: Regarding the different trace ratios for spectral methods and GNNs.**
**A:** Through the analysis in section 4, we wish to characterize the nature of the learnable weights that allow a GNN to perform "better" than spectral methods. In this context, "better" means that the GNN requires fewer layers than the number of projected power iterations needed by spectral methods to reduce NC1 (and to achieve good inference results). In Appendix F, we show additional experiments with depths of 64 and 128. The takeaway from our observations is that the weights $W_1, W_2$ of the GNNs play a major role in projecting the features and aiding faster reduction rates of NC1. Since spectral methods do not comprise any learnable projections, the benefits of GNNs are evident in this scenario.
Regarding the GNNs $\mathcal{F}, \mathcal{F}'$, we have not observed any significant difference in their performance that may be caused by the varying rates of reduction in NC1. It seems that the presence of $W_1$ delays the NC1 reduction, but both networks reach approximately the same level of NC1 at the last layer. From a practical viewpoint, one can consider GNN layer pruning based on NC1 reduction rates for efficient inference.
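The parameter-free baseline discussed here can be illustrated on a toy graph: repeatedly applying a row-normalized adjacency (a plain "power iteration", with no learnable $W_1, W_2$) already reduces a simple within-/between-class trace-ratio surrogate of NC1 on an idealized two-community graph. This is a hypothetical sketch, not the paper's experimental setup:

```python
import numpy as np

def nc1_surrogate(H, labels):
    """Tr(Sigma_W)/Tr(Sigma_B) over the rows of H, grouped by labels."""
    mu_g = H.mean(axis=0)
    tr_w = tr_b = 0.0
    for c in np.unique(labels):
        Hc = H[labels == c]
        mu_c = Hc.mean(axis=0)
        tr_w += ((Hc - mu_c) ** 2).sum()
        tr_b += len(Hc) * ((mu_c - mu_g) ** 2).sum()
    return tr_w / tr_b

# two disjoint 4-cliques: a best-case community structure
m = 4
clique = np.ones((m, m)) - np.eye(m)
A = np.block([[clique, np.zeros((m, m))],
              [np.zeros((m, m)), clique]])
A_hat = A / A.sum(axis=1, keepdims=True)       # row-normalized adjacency
labels = np.repeat([0, 1], m)

H = np.eye(2 * m)                              # one-hot initial features
ratios = []
for _ in range(5):
    H = A_hat @ H                              # one "power iteration"
    ratios.append(nc1_surrogate(H, labels))
print(ratios)  # monotonically decreasing on this graph
```

On this idealized graph the ratio shrinks geometrically with depth; the point of the comparison above is that learnable projections can accelerate this reduction relative to such plain iterations.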
**Q: Regarding section 4 (Theorem 4.1) and the connection with over-smoothing**
**A:** Indeed, over-smoothing has been widely studied in the literature to model the reduction of within-class and between-class feature variability across the layers of a GNN during training. In contrast, NC studies the reduction in within-class and an "increase" in between-class feature variability (especially of the penultimate layer during training). Interestingly, when the penultimate layer features exhibit NC, we can say that the over-smoothing problem is resolved. This is indeed an important takeaway for the GNN community, which we will clarify in the revision.
However, our analysis in section 4 considers well-trained GNNs in inductive settings for layer-wise analysis during **inference**. This setup inherently differs from the over-smoothing analysis (e.g., [3]), which considers the training phase in semi-supervised settings. Interestingly, even during inference, we found variability reduction patterns (Fig. 5) that resemble the partial over-smoothing observed by [3].
[1] Tirer, Tom, et al. "Extended unconstrained features model for exploring deep neural collapse." ICML, 2022.
[2] Han, X. Y., et al. "Neural Collapse Under MSE Loss: Proximity to and Dynamics on the Central Path." ICLR, 2021.
[3] Keriven, Nicolas. "Not too little, not too much: a theoretical analysis of graph (over) smoothing." NeurIPS, 2022. | Summary: This work investigates the feature evolution in GNNs in the inductive setting. In particular, it focuses on the Neural Collapse phenomenon, using the SSBM data model to ensure the existence of this phenomenon. Moreover, it verifies that the extent of NC in GNNs is not as severe as in instance-wise classification settings.
Strengths: 1. It is interesting to study the feature evolution within GNNs, and especially its correlation with the topological information.
2. The data model is clearly presented and the experiments are close to the theoretical results.
Weaknesses: 1. The study of feature evolution in GNNs, and especially its correlation with the topological information, is a great topic; however, the data model and the concentration on the NC condition slightly retreat from the original claim into a theoretically friendly but less practical direction.
2. Most concerns are raised about the practical value of this study, which lies in two aspects:
1. the motivation of this work needs to be further clarified, e.g., why the NC phenomenon is important, and what about other stages of feature development.
2. data model: SSBM is a good model to study the topological features.
My concerns are mainly about the practical significance of this paper. If I have missed anything important in the proofs and arguments, please let me know.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Major:
1. A more comprehensive motivation for studying feature evolution in GNNs is highly recommended. There is also a related problem: if the paper focuses on the NC view, the pre-convergence stage is completely missing. Since most GNN practitioners work in the pre-convergence stage, that stage should be more worthy of study. This is just to make sure that the problem and the perspective are really practical.
2. For the inductive setting, which this paper concentrates on, it is not necessary that the test nodes are in a different graph than the training nodes. It only requires that the test nodes do not appear in the training phase.
3. The experiments are performed only on the graphs simulated by the SSBM assumptions.
2. Minor:
1. Line 53-54: "Essentially, we highlight the main differences in the analysis of NC in GNNs by identifying structural conditions on the graphs under which the global minimizers of the training objective exhibit full NC1." This is hard to understand; consider splitting it into shorter sentences, since it is one of the main findings of the paper and needs to be stated clearly.
2. Line 79-80: the claim that the MSE loss is more and more popular for training DNN classifiers might be somewhat misleading; rather, MSE is a well-known alternative to more complex loss functions in machine learning theory.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations are not included by the authors. As I presented questions above, my concerns are mainly about the practical significance of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **General note:** We would like to thank **Reviewer t9NA** for the constructive feedback. The 2 minor issues will be fixed in the revision.
**The importance of neural collapse (NC).** In standard DNN settings (e.g., image classification on MNIST), the classifiers tend to exhibit NC once they perfectly classify the training data (i.e., during the terminal phase of training, TPT). NC includes both a "reduction" in the features' within-class variability and an "increase" (or stabilization) in the between-class variability, such that the features form certain low-dimensional geometric structures. This phenomenon has allowed the community to understand the benign effects of overparameterization and the benefits of training beyond the "zero-classification-error" stage for DNNs (e.g., cases with better test accuracy and improved adversarial robustness [1]). Importantly, we demonstrate that the case of GNNs is not similar to any previous work on NC, as the structural constraints of graphs tend to hinder NC. Also, since the over-smoothing problem is common in deep GNNs, NC shows potential to address such key issues and to have a promising impact.
**The data and network models.** To explore the level of NC in GNNs in a principled fashion, we start with SSBM graphs, as their properties are well-suited for theoretical GNN analysis [2]. The gUFM model presented in this paper is much more complicated than any theoretical model in previous publications on NC [4], as we retain the graph topology in the analysis. Importantly, even for our "optimistic" model, we show that the desired configuration of the penultimate layer features (in terms of optimization optimality) does not exhibit exact NC unless the graph satisfies a strict structural condition. This immediately implies an inherent difference between practical GNNs and DNNs. Also, note that it seems very difficult to derive the exact training dynamics in GNNs without trivial assumptions on the structure. Thus, we leverage the literature on SSBM graphs and present a rigorous gradient flow analysis to explain the partial collapse observed in our experiments. This analysis is already a major step beyond analyzing only the minimizers of the model.
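For readers unfamiliar with the data model, an SSBM$(N, C, p, q)$ graph can be sampled as follows (a minimal, hypothetical sketch; the paper's experiments use their own generation pipeline):

```python
import numpy as np

def sample_ssbm(N, C, p, q, seed=0):
    """Sample an SSBM(N, C, p, q) adjacency matrix.

    N nodes split into C balanced communities; an edge appears with
    probability p inside a community and q across communities.
    """
    assert N % C == 0, "communities are assumed balanced"
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(C), N // C)
    same = labels[:, None] == labels[None, :]
    probs = np.where(same, p, q)
    upper = np.triu(rng.random((N, N)) < probs, k=1)  # sample upper triangle
    A = (upper | upper.T).astype(float)               # symmetrize, no self-loops
    return A, labels

A, labels = sample_ssbm(N=200, C=2, p=0.2, q=0.02)
print(A.shape, A.sum() / 2)  # adjacency shape and edge count
```

Varying $p$ relative to $q$ controls how strongly the community structure is expressed, which is exactly the knob used to probe the structural conditions above.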
**Practical significance:**
1. Observe that attaining NC solves the over-smoothing problem as NC represents the maximal separation between feature class means in addition to zero within-class feature variability.
2. Recently, Ma et al. [3] showed that homophily might not be necessary for good GNN performance on node classification tasks. By leveraging Theorem 3.1 from our work, we can observe that the structural condition required for collapsed minimizers covers both homophilic and heterophilic graphs, thus providing insight into these surprising empirical results.
3. Our work can potentially shed light on an ideal "graph-rewiring" strategy as we know the nature of the minimizers when condition C is satisfied by the computational graph.
**Q: Regarding feature evolution and pre-convergence phases of training.**
**A:** To address the reviewer's comment, we would like to emphasize that the contribution of our analysis is to understand why GNNs are unable to reach the convergence (TPT) phase. Our result in Theorem 3.1 highlights that, when using the MSE loss, even though DNNs exhibit NC minimizers [4], in the case of GNNs, the graphs must satisfy a strict structural condition for collapsed minimizers.
Regarding the pre-convergence phase, we have provided a detailed analysis of the reduction in NC1 along the gradient flow in Theorem 3.3. As a practical takeaway, we have provided bounds on the regularization (Theorem 3.3 proof) that is needed for the trace of between-class feature covariance to increase along the flow. Overall, the reason for initiating this line of work through a theoretical approach is to present optimistic settings in which GNNs fail to exhibit NC and promote further study in the community.
**Q: Regarding test nodes in the inductive setting.**
**A:** We have followed the setup of Chen et al. [5], which has been widely adopted for community detection in supervised settings.
**Q: Regarding experiments only on SSBM graphs**
**A:** Attaining TPT state is not so simple for GNNs on real-world graphs as indicated by a lack of rigorous benchmarks in the community on supervised community detection (in inductive settings). We have performed experiments on the real-world graphs as per the strategy of [5] as follows:
**Dataset.** We consider the "com-amazon" graph from the SNAP collection and prepare 1000 training graphs and 100 test graphs, with each graph having 2 non-overlapping communities from the top 5,000 communities provided by the SNAP collection.
**Baselines and results**: Although the graphs obtained in this fashion are not of the same size and have imbalanced communities, we did observe a partial collapse even in the pre-convergence phase (however, not to the extent observed in the SSBM case as the datasets are relatively complex to attain TPT).
**We thank the reviewer for the comments and will include these key discussions in the revision.**
[1] Papyan, Vardan, et al. "Prevalence of neural collapse during the terminal phase of deep learning training." PNAS, 2020.
[2] Keriven, Nicolas. "Not too little, not too much: a theoretical analysis of graph (over) smoothing." NeurIPS, 2022.
[3] Ma, Yao, et al. "Is homophily a necessity for graph neural networks?" ICLR, 2022.
[4] Han, X. Y., et al. "Neural Collapse Under MSE Loss: Proximity to and Dynamics on the Central Path." ICLR, 2021.
[5] Chen, Zhengdao, et al. "Supervised Community Detection with Line Graph Neural Networks." ICLR, 2019.
---
Rebuttal Comment 1.1:
Title: Response to the authors' rebuttal
Comment: I thank the authors for their detailed responses to my concerns. Especially the answers regarding "**Regarding feature evolution and pre-convergence phases of training.**" and the data model greatly improve my understanding. I would like to ask three more questions:
1. Talking about over-smoothing phenomenon, over-smoothing should not be a relative issue to NC, since it is not only "(NC1) The within-class variability of the deepest features decreases", but to a global extent.
2. Still, from a practical point of view, the rewiring point should be included in the discussion of this paper, e.g., in a section after the experiments, to help build more reasonableness and soundness for this work. I fully understand that your work is from an analytical point of view.
3. Finally, the sentence "in the case of GNNs, the graphs must satisfy a strict structural condition for collapsed minimizers" gives a good explanation, and this should also be made clear after the theoretical analysis (SSBM and UFM), especially by introducing a quantified variable that controls the randomness in real graphs and empirically measuring how significant it is. This would make the understanding smoother. This may be difficult to achieve in this rebuttal stage, but I would recommend doing so in a later revision.
Overall, the authors have addressed most of my concerns well and I would like to raise my score to 5. I ask the authors to be patient with my concerns and suggestions above.
---
Reply to Comment 1.1.1:
Comment: **We greatly appreciate the reviewer's response.**
**Q: Talking about over-smoothing phenomenon, over-smoothing should not be a relative issue to NC, since it is not only "(NC1) The within-class variability of the deepest features decreases", but to a global extent.**
**A:** Thanks for the comment. If we understand the comment correctly, by "global extent" do you mean that oversmoothing also pertains to a reduction of between-class variability, in addition to a reduction in within-class variability? If yes, then note that, as oversmoothing typically pertains to a reduction in between-class and within-class variability "across layers during training", the deepest features tend to be affected the most. However, with an NC analysis of the penultimate layer features (deepest features), we identify the conditions (pertaining to graph structure and regularization) under which the within-class variability "decreases" and between-class variability "increases" for the deepest features during training. Thus, potentially mitigating the effects of over-smoothing.
**Q: Still, from a practical point of view, the rewiring point should be included in the discussion of this paper, e.g. a section after the experiments, to help build more reasonableness and soundness of this work.**
**A:** We will add a brief discussion on the graph rewiring aspects in the revised version. Thank you for the suggestion.
**Q: Finally, the sentence "in the case of GNNs, the graphs must satisfy a strict structural condition for collapsed minimizers" gives a good explanation, and this should also be made clear after the theoretical analysis (SSBM and UFM), especially by introducing a quantified variable that controls the randomness in real graphs, and how significant it is by doing empirical measurement. To this end, the understanding should be more smoothing.**
**A:** We added a statement in lines 180-181 before Theorem 3.1 which conveys this message. Following the reviewer's suggestion, we will add a statement after the theoretical results as well, to effectively convey the implications.
Regarding the suggested experiments: if we understand correctly, the reviewer suggests introducing a variable that controls the randomness of adding edges between nodes in real graphs (similar to $p, q$ in SSBM graphs), and checking whether being close to the proposed condition leads to more collapse. This is a very interesting direction for future research and we will do our best to discuss it in the revised version of the article. Thank you again for this insight.
**We are happy to answer any additional questions that you may have. Thank you.** | Summary: EDIT: I am changing my score based on the revision. I think this is a very interesting paper. In particular, it helps us understand why GNNs work well, but not super-duper well. CNNs get full NC, but GNNs don't
This work discusses neural collapse in GNNs. Much of the work on neural collapse focuses on unconstrained features, which is somewhat in conflict with the constraints imposed by graph structure. The main contributions of this work are:
1. An empirical study showing some decrease in within-class variability, but not to the same extent as in vanilla DNNs.
2. A graph-based UFM; they prove that, even with this optimistic model, a strict structural condition is needed to get full NC.
3. A study of the gradient dynamics of the graph-UFM that provides partial theoretical reasoning for the partial NC.
4. A comparison of the NC1 metrics of GNNs vs. spectral clustering.
Overall, this is a very good paper. It shows that GNNs exhibit partial, but not total, neural collapse. In some sense, this may help explain why GNNs are near state of the art for many tasks, but nowhere near "perfect". However, I believe there is an error in the proof of Theorem 3.2. I believe this error is fixable, but that the statement of the theorem will have to change. Therefore, I am strongly opposed to acceptance before this issue is fixed.
I should also mention that I did not have time to check the last two proofs as thoroughly as I would like. The authors should make sure to check them very carefully before resubmission since there was already one mistake. (I will check aggressively then.)
If these issues are addressed, I would likely be in favor of acceptance.
Strengths: This paper contributes many new ideas to understanding NC in the graph setting, many of which are highly non-trivial extensions of the original NC work, such as providing a characterization of when NC occurs and showing that the loss function decreases but levels off before zero.
Weaknesses: Major Issues
In the proof of Theorem 3.1, it is not clear what $y_c$ is. This needs to be made more clear.
In Section D.2, you should recall the definitions of the different $\Sigma$s or at least reference where they are defined. It is difficult to keep track of the many definitions.
I don't understand the inequality of the form $P(A|B)\leq P(A)$ given in 444. I also don't think you need it to obtain equation (50) which I think is correct. Could you please revise or clarify? (I am not overly concerned since the downstream equation appears to me to be correct. However, there is a similar issue for the diagonal terms which I think is more serious.)
The bound obtained in Theorem 3.2 is hard to parse and it is not obvious to me whether or not this quantity tends to zero. Perhaps you can approximate via Stirling's inequality or something? Footnote 2 is unconvincing since the terms (and the number of terms) change as n increases. I think you can argue (informally) that for large $n$ the probability that a binomial RV is exactly equal to its mean is equivalent to the probability that a normal RV with std $n^{-1/2}$ is within $1/n$ of its mean. Interpreting the standard deviation as the average difference between the sample and the mean, this means that the probability of this occurring is on the order of $(1/n)/(1/\sqrt{n})$, which then tends to zero as $n\rightarrow \infty$.
(THIS IS THE BIG ONE): I don't think the argument used for the diagonal blocks in the proof of Theorem 3.2, regarding the upper bound for conditional probabilities, is correct. For example, set $t_{cc}=n-1$ and consider the probability of $E^c_{c,n}$ occurring given $E^c_{c,i}$ for all $i<n$. Given that every other vertex is connected to $n-1$ other vertices of class $c$, this means that the set of all vertices in class $c$ forms a clique. Therefore the probability of $E^c_{c,n}$ occurring is one. THIS ISSUE MUST BE FIXED IN ORDER FOR ME TO RECOMMEND ACCEPTANCE. Possible remedies include (i) convince me that I am mistaken (which is possible); (ii) remove the terms corresponding to the diagonal from the statement of the Lemma (in this case you should verify that this still converges to zero); (iii) add the assumption that we are in the sparse regime to the statement of the lemma, and argue that cliques (or dense subgraphs in general) are exceptionally unlikely in this regime and that for reasonable values of $t$ the events are ``close to being independent." This is likely the best solution, but it would also require a bunch of non-trivial computations in order to make it precise (and the statement of the theorem would still need to change to reflect what you get in these estimates); (iv) the event that the diagonal blocks satisfy the condition you need is equivalent to an Erdős–Rényi random graph being regular; there is probably a bunch of existing work on this topic.
Minor Issues:
In the summation on line 121, adding a subscript $j$ to the summation would make things clearer, i.e., $j \in \mathcal{N}_c(v_{c,i})$.
The capitalization is off in some of the references; "Euclidean" should be capitalized in ref [7]. Please fix.
Line 344 in the proof of Theorem 3.1: ``Theorem 3.1" should be capitalized.
It might be a good idea to first explain the idea of the proof of Theorem 3.2, i.e., that first you show that in the limit, every vertex in class $c$ has the same degree. Therefore, the stated condition that all of the vertices $v_i$ have the same \textit{fraction} of neighbors in each class $c'$ is equivalent to them having the same \textit{number} of neighbors in $c'$ (which can then be estimated more easily using formulae for binomial RVs).
Line 419 in the supplement: I think you should remove the word "essentially", right? The number of neighbors IS a binomial RV.
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors:
1. Line 44, when you say that the rules become similar to the nearest class center in the feature space do you mean the classifier is effectively an NN-classifier?
2. Why is there a $y_k$ in (1), but $y_k(V_k)$ in (2)? This seems inconsistent.
3. In the ``simpler" model (4), is the definition of $\hat{A}$ the same as in (3)? In this setting it would make sense to add self loops as in Kipf and Welling, i.e., $\hat{A}=(A+I)(D+I)^{-1}$. Otherwise, nodes don't communicate with themselves at all. Am I missing something?
4. For the sake of self-containedness, could you please briefly explain the instance normalization?
5. How important is the assumption of balanced communities in your theoretical analysis, and how does this affect the applicability of your theory to real-world imbalanced datasets?
6. The hidden features are the same size in all layers. How does this affect the results? What happens if you first increase and then decrease the size (or something similar)? Is there still partial NC1?
7. In the Graph UFM, if the $H_k$ are freely optimizable, why do you still need separate matrices $W_2$ and $H_k$? (This may be a standard thing. I am not very familiar with UFMs.)
8. For the stochastic block models, shouldn't there be some sort of concentration of measure result where for large $n$ we have $A\approx \mathbb{E}A$? This should allow you to apply Theorem 3.3 to it, right?
9. In Theorem 3.3, you obtain that $\tilde{NC}_1$ decreases, which means it converges to some non-zero limit. Is there a good way to think about what this limit is? Does it, for example, go to zero in the case that $E\rightarrow 0$? How does it depend on $\alpha$?
10. In line 427 of the proof of Theorem 3.2, I think you need probability $1-4n^{-r}$ because of the union bound. (This is unimportant but should be fixed.)
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **General note:** We sincerely thank **Reviewer 4uoY** for the detailed feedback. Due to character limits, please find the responses to all the key questions below. All the minor issues will be fixed in the revision.
**Q: I don't understand the inequality of the form $P(A|B) < P(A)$ given in 444. I also don't think you need it. Could you please revise or clarify?...**
**A:** **We agree** with the reviewer that this inequality is not needed for the analysis of the off-diagonal block (ln 444). We will remove the statement corresponding to this inequality to avoid confusion.
**Q: The bound obtained in Theorem 3.2 is hard to parse and it is not obvious to me whether or not this quantity tends to zero...**
**A:** The term corresponding to the "off-diagonal blocks" in Theorem 3.2 is given by:
\begin{align}
\left( \sum_{t=0}^n \bigg[{n \choose t}q^{t}(1-q)^{n - t}\bigg]^n\right)^{\frac{C(C-1)}{2}}
\end{align}
Here, observe that the binomial expansion of $(q + 1-q)^n$ is given by:
\begin{align}
1 = (q + 1-q)^n = \sum_{t=0}^n {n \choose t}q^{t}(1-q)^{n - t},
\end{align}
where each term, say $f(t) = {n \choose t}q^{t}(1-q)^{n - t}$ in the sum is strictly less than 1. Also, note that:
\begin{align}
1^n = \left(\sum_{t=0}^n f(t) \right)^n
&= \sum_{t=0}^n f(t)^n + \sum_{k_0+ \cdots + k_n = n, 0 \le k_0, \cdots, k_n < n} {n \choose k_0,\cdots,k_n} \prod_{t=0}^n f(t)^{k_t}.
\end{align} This gives us:
\begin{align}
\sum_{t=0}^n f(t)^n = 1 - \sum_{k_0+ \cdots + k_n = n, 0 \le k_0, \cdots, k_n < n} {n \choose k_0,\cdots,k_n} \prod_{t=0}^n f(t)^{k_t}.
\end{align}
As $n$ increases, both the terms and the number of terms on the LHS change, but recall that $\sum_{t=0}^n f(t) = 1$. Since the maximum value that $f(t)$ can take is strictly less than 1, each term tends to zero after taking the $n^{th}$ power as $n$ grows larger. Thus $\mathbb{P}(\mathcal{G} \text{ obeys \textbf{C}} )$ is negligible when $N \gg C$.
**Q: (THIS IS THE BIG ONE): I don't think the argument used for the diagonal blocks is the proof of Theorem 3.2 regarding the upper bound for conditional probabilities is correct...**
**A:** Thanks for pointing it out. To recall, the purpose of theorem 3.2 is to show that SSBM graphs rarely satisfy condition C. After exploring the insightful directions provided by the reviewer, we feel that just considering the off-diagonal blocks **(suggestion (ii) by reviewer)** should be sufficient to convey the idea in a simple manner, as we have shown (in the above response) that even this probability tends to 0.
**(Revised) Theorem 3.2:** Let $\mathcal{G}=(\mathcal{V}, \mathcal{E})$ be drawn from SSBM$(N, C, p, q)$. For $N \gg C$, we have
\begin{align}
\mathbb{P}\left(\mathcal{G} \text{ obeys \textbf{C}} \right) < \left( \sum_{t=0}^n \bigg[{n \choose t}q^{t}(1-q)^{n - t}\bigg]^n\right)^{\frac{C(C-1)}{2}}.
\end{align} which converges to $0$ for large $n$ as shown above. Numerically, when $C=2, N = 1000, q=0.0017$, we get $\mathbb{P}(\mathcal{G} \text{ obeys \textbf{C}} ) < 2.18 \times 10^{-188}$, which is practically zero.
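As an independent numerical sanity check, the bound can be evaluated in log-space to avoid underflow (a sketch; `N`, `C`, `q` follow the numerical example above, and `n = N/C` is the community size):

```python
import math

def log10_bound(N, C, q):
    """log10 of (sum_t f(t)^n)^(C(C-1)/2), with f(t) = binom(n,t) q^t (1-q)^(n-t), n = N/C."""
    n = N // C  # community size
    # log f(t), computed in log-space since f(t)^n underflows in float arithmetic
    log_f = [math.log(math.comb(n, t)) + t * math.log(q) + (n - t) * math.log(1 - q)
             for t in range(n + 1)]
    # log-sum-exp of n * log f(t)
    m = max(n * lf for lf in log_f)
    log_sum = m + math.log(sum(math.exp(n * lf - m) for lf in log_f))
    return (C * (C - 1) / 2) * log_sum / math.log(10)

print(log10_bound(1000, 2, 0.0017))  # a strongly negative exponent: practically zero
```

The printed base-10 exponent is astronomically negative, consistent with the claim that the probability is practically zero (the exact constant depends on how the bound is instantiated).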
**A note on suggestion (iii):** We found that this line of analysis is quite non-trivial. In particular, formalizing the "almost independent" notion for the events $\{ E_{c,1}^c (t), E_{c,2}^c (t), \cdots, E_{c,n}^c (t)\}$ is not well justified, as $\prod_{i=1}^n P(E_{c,i}^c(t)) = \left[ {n \choose t}p^t(1-p)^{n-t} \right]^n$ tends to smaller values [1] for any $t$ as $n$ increases (shown in the above response).
**Regularity of Erdős-Rényi graphs:** We explored the literature on the existence of regular graphs on $n$ nodes and found that even asymptotic results for the $p \propto \frac{\log N}{N}$ setting appear not to be well explored [2]. Also, the existing results for degrees of order $O(\sqrt{n})$, $c/n$, etc., are far too non-trivial for the message we want to convey in this theorem.
[1] Dykstra, R. L., et al. "Events which are almost independent." The Annals of Statistics 1.4 (1973)
[2] Wormald, Nicholas C. "Models of random regular graphs."
**Q: It would make sense to add self-loops, otherwise, nodes don't communicate with themselves at all...?**
**A:** We have not used $\hat{A} = (A + I)(D + I)^{-1}$ explicitly as the nodes of the sampled SSBM graphs are allowed to have self-edges. We will explicitly mention this in the revision to avoid confusion.
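For concreteness, here is a minimal NumPy sketch of the normalization $\hat{A} = (A + I)(D + I)^{-1}$ on a small hypothetical graph (illustrative only; the adjacency matrix is made up):

```python
import numpy as np

# Hypothetical 4-node undirected graph without explicit self-loops
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 1.],
              [1., 1., 0., 0.],
              [0., 1., 0., 0.]])
D = np.diag(A.sum(axis=0))                      # degree matrix
A_hat = (A + np.eye(4)) @ np.linalg.inv(D + np.eye(4))

print(A_hat.sum(axis=0))   # each column sums to ~1 after normalization
print(np.diag(A_hat))      # strictly positive diagonal: every node "hears" itself
```

Adding $I$ before normalizing guarantees a positive diagonal, which is the self-communication the question is asking about.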
**Q: Importance of balanced communities and practical relevance.**
**A:** This assumption is critical, as we extensively leverage the properties of Kronecker products in most of our proofs. In the case of GNNs, since we show partial collapse even in the ideal scenario of balanced communities, specific "rewiring" mechanisms for the computational graph might be needed in practical settings (for example, to satisfy condition C with our GNN design) to aid the optimizers. Additionally, a promising real-world application is the mitigation of over-smoothing via such rewiring mechanisms, as NC1 indicates maximal separation between class-mean features.
**Q: Role of varying hidden dimensions.**
**A:** Scenario 1: first 16 layers with dim = 8, next 16 layers with dim = 16. Scenario 2: first 16 layers with dim = 8, next 16 layers with dim = 4. The final layer has dim = 2 in both scenarios. We observed that both scenarios exhibited partial NC, but the "collapse" in scenario 2 was relatively stronger (i.e., relatively lower NC1 values).
**Q: Intuition about the limit in Theorem 3.3 and role of $\alpha$.**
**A:** Intuitively, one can think of this limit as a state at which $\Sigma_W$ has decreased (sufficiently) and $\Sigma_B$ has increased (sufficiently), since exact collapse is not the desired state for an $A$ that is randomly sampled from SSBM. When $E \to 0$, the resulting $\mathbb{E}A$ theoretically satisfies condition C and the gradient flow tends towards the NC1 state, which indicates that $\widetilde{NC}_1$ tends to 0. Finally, $\alpha$ can be thought of as the regularization needed for the trace of the between-class covariance to increase along the flow. | Rebuttal 1:
Rebuttal: **Our response to the general comments on motivation, theoretical modeling, practical significance, over-smoothing, and a fix to Theorem 3.2.**
**Modifying Theorem 3.2 to ignore the non-rigorous diagonal block case**
As pointed out by one of the reviewers, we have updated Theorem 3.2 to leverage only the off-diagonal blocks in the probability-bound calculation. The case for the diagonal block led to non-trivial assumptions and results which seemed to complicate the message of the theorem.
**The importance of neural collapse (NC).** In standard DNN settings (e.g., image classification on MNIST), classifiers tend to exhibit NC once they perfectly classify the training data (i.e., during the terminal phase of training, TPT). NC comprises both a "reduction" in the features' within-class variability and an "increase" (or stabilization) in the between-class variability, such that the features form certain low-dimensional geometric structures. This phenomenon has allowed the community to understand the benign effects of overparameterization and the benefits of training beyond the "zero-classification-error" stage for DNNs (e.g., cases with better test accuracy and improved adversarial robustness [1]). Importantly, we demonstrate that the case of GNNs is not similar to any previous work on NC, as the structural constraints of graphs tend to hinder the emergence of NC. Also, since the over-smoothing problem is common in deep GNNs, NC shows potential to address such key issues and to have a promising impact.
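To make the within-/between-class variability notions concrete, the following sketch computes a simple trace-ratio NC1-style metric on synthetic Gaussian features (both the metric variant and the data are illustrative assumptions, not the paper's exact definition):

```python
import numpy as np

def nc1(features, labels):
    """Ratio of within-class to between-class scatter (one common NC1-style variant)."""
    mu_g = features.mean(axis=0)                      # global mean
    sw, sb = 0.0, 0.0
    for c in np.unique(labels):
        fc = features[labels == c]
        mu_c = fc.mean(axis=0)                        # class mean
        sw += ((fc - mu_c) ** 2).sum()                # within-class scatter
        sb += len(fc) * ((mu_c - mu_g) ** 2).sum()    # between-class scatter
    return sw / sb

rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 100)
# Tight clusters (near collapse) vs. loose clusters, centered at -1 and +1
tight = np.vstack([rng.normal(m, 0.01, (100, 2)) for m in (-1.0, 1.0)])
loose = np.vstack([rng.normal(m, 1.0, (100, 2)) for m in (-1.0, 1.0)])
print(nc1(tight, labels), nc1(loose, labels))  # tight clusters give a much smaller NC1
```

Lower NC1 means the within-class variability has collapsed relative to the between-class separation, which is the sense in which "lower NC1" is used above.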
**The data and network models.** To explore the level of NC in GNNs in a principled fashion, we start with SSBM graphs, as their properties are well-suited for theoretical GNN analysis [2]. The gUFM model presented in this paper is considerably more involved than the theoretical models in previous publications on NC [4], as we retain the graph topology in the analysis. Importantly, even for our “optimistic” model, we show that the desired configuration of the penultimate-layer features (in terms of optimization optimality) does not exhibit exact NC unless the graph satisfies a strict structural condition. This immediately implies an inherent difference between practical GNNs and DNNs. Also, note that it seems very difficult to derive the exact training dynamics of GNNs without restrictive assumptions on the structure. Thus, we leverage the literature on SSBM graphs and present a rigorous gradient-flow analysis to explain the partial collapse observed in our experiments. This analysis is already a major step beyond analyzing only the minimizers of the model.
**Significance.**
1. The results in Theorem 3.1 on the strict structural condition of the graph, together with the gradient-flow analysis in Theorem 3.3, indicate that, contrary to the case of DNNs where the MSE loss leads to collapsed minimizers, the feature evolution in GNNs during training leads only to partial collapse (even in the simplified/optimistic setting). This result sheds light on the impact of structural conditions on the ideal feature configurations of an expressive GNN.
2. Additionally, observe that attaining neural collapse solves the over-smoothing problem, as neural collapse represents maximal separation between feature class means in addition to zero within-class feature variability. This is indeed an important takeaway for the GNN community, which we will clarify in the revision.
3. A recent work by Ma et al. [3] showed that homophily might not be necessary for good GNN performance on node-classification tasks. By leveraging Theorem 3.1 from our work, we can observe that the structural condition (dubbed condition C) required for collapsed minimizers covers both homophilic and heterophilic graphs. Thus, our results provide insight into these surprising empirical findings.
4. Our work can potentially shed light on an ideal "graph-rewiring" strategy for improved GNN performance as we know the nature of the minimizers when condition C is satisfied by the computational graph.
5. We believe that extensions of our analysis to semi-supervised /self-supervised settings would be of great value to the community. For instance, the structural conditions that we analyze in this work can be used for graph augmentation in graph contrastive learning.
**Oversmoothing and neural collapse.**
Over-smoothing has been widely studied in the literature to model the reduction of within-class and between-class feature variability across the layers of a GNN during training. In contrast, NC studies the reduction in within-class variability together with an "increase" in between-class feature variability (especially at the penultimate layer). Interestingly, note that when the penultimate-layer features exhibit neural collapse, we can address the over-smoothing problem. In addition to Section 3, which highlights the NC results for the penultimate layer during training, in Section 4 we present a layer-wise feature evolution in a well-trained GNN. This analysis considers well-trained GNNs in inductive settings for the layer-wise analysis, and it inherently differs from the over-smoothing analysis of [2], which considers networks in the training phase of semi-supervised settings.
**Overall, we would like to thank the reviewers for their insightful discussions and for strengthening the quality of our paper.**
[1] Papyan, Vardan, et al. "Prevalence of neural collapse during the terminal phase of deep learning training." PNAS 2020
[2] Keriven, Nicolas. "Not too little, not too much: a theoretical analysis of graph (over)smoothing." NeurIPS 2022
[3] Yao Ma, et al. "Is homophily a necessity for graph neural networks?" ICLR 2022
[4] Han, X. Y., et.al. "Neural Collapse Under MSE Loss: Proximity to and Dynamics on the Central Path." ICLR. 2021 | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Is RLHF More Difficult than Standard RL? A Theoretical Perspective | Accept (poster) | Summary: The authors consider the RLHF setting. First, in the case where (a) there exists a ground-truth utility function and (b) feedback is positive with higher probability when there is a larger difference in rewards, they derive an algorithm that iteratively winnows down a ball of reward functions. When such structure cannot be assumed, they derive a reduction to adversarial MDPs for solving for the von Neumann winner.
Strengths: (+) The paper is clearly written and the algorithms are intuitive and reasonable.
(+) I haven't seen someone consider the von Neumann winner before in the RLHF space.
Weaknesses: (-) I think there's some weird formatting on the top of pg. 8 as the line numbers stop. Could you fix this? Also, I think you're missing a superscript $(i)$ in the definition of a partial trajectory. Also, please add some space between line 343 and the equation below.
(-) There are no experiments whatsoever.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1) Could you add in https://arxiv.org/pdf/2305.18505.pdf and https://arxiv.org/pdf/2305.14816.pdf to your related work section and discuss how your approach differs from theirs?
2) Do you get any sort of agnostic guarantee for Alg. 1 in the non-realizable case?
3) Are "oracle complexity" (line 169) and "query complexity" (footnote 2) the same thing? If so, could you pick a term and stick to it?
4) Imagine if instead of receiving stochastic feedback, you received a deterministic (+/-) label from the oracle. While this is a simpler setting in some sense, I'm not sure if you'd still be able to compare things to a fixed initial trajectory. Do you have any thoughts on how you could modify Alg. 1 to handle this setting?
5) Could you clarify how you could run Alg. 1 with k-wise comparison feedback? I found section 3.3 of the paper unfortunately vague.
6) Footnote 3's reasoning is a bit artificial. Could you instead motivate the setting you consider by noting that for empirical RLHF, people usually treat it as a "bandit" problem and only give feedback at the ending?
7) Could you run something like NPG instead as a no-regret algorithm for the adversarial MDPs your reduction requires solving? If so, that might be an illustrative point to add in given it's a bit closer to the PPO-style algorithms people use in practical RLHF. Also, given everything is fully observed, is there an algorithm that gets a tighter rate than OMLE? It sort of feels like you're using a heavier hammer than necessary.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We thank the reviewer for their positive evaluation and detailed feedback. We would address the reviewer’s concerns as follows.
**Q1. Formatting issues.**
**A1.** Thank you for pointing out these issues. We will correct them in the final version since we cannot update the paper during the rebuttal process.
**Q2. Additional related work.**
**A2.** Thank you for bringing these two papers to our attention. We will revise the related work section accordingly in the final version.
**Q3. “Do you get any sort of agnostic guarantee for Alg. 1 in the non-realizable case?”**
**A3.** If the ground-truth reward function is not realizable but known to have a small approximation error, i.e. $\Vert r^\star - r\Vert_\infty<\Delta$ for some $r\in \mathcal{R}$, we believe it is possible to obtain results on learning the optimal policy up to error $O(\sqrt{d_r} \Delta)$, where the approximation error is amplified by the square root of the eluder dimension similar to the case in bandits with misspecification (Lattimore et al., 2020).
**Q4. On “oracle complexity” and “query complexity”.**
**A4.** Yes, they are referring to the same concept. We will modify Line 169 to “query complexity”.
**Q5. On deterministic labels.**
**A5.** If labels are deterministic in the utility-based setting, we have established an impossibility result (Lemma 3) where it is impossible to identify the optimal policy. Intuitively, this is because when labels are deterministic, only ordinal information about the reward function is kept and the rest is lost. For instance, one can only learn that $\tau_1$ is preferred over $\tau_2$ and $\tau_2$ over $\tau_3$. However, without information on the actual reward differences, one cannot compare a policy that generates $\tau_2$, and one that generates $\tau_1$ with probability 0.5 and $\tau_3$ with probability 0.5.
On the other hand, in the non utility-based setting, our results can handle deterministic or arbitrary link functions. However, the solution concept would be different from that of the utility-based setting.
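The intuition can be checked with a small numeric example (hypothetical reward values): both reward functions below induce the same ordering $\tau_1 \succ \tau_2 \succ \tau_3$, so deterministic labels cannot distinguish them, yet they disagree on whether the deterministic policy beats the mixed one.

```python
# Two hypothetical reward functions with the identical ordering t1 > t2 > t3
r1 = {"t1": 3.0, "t2": 2.0, "t3": 0.0}
r2 = {"t1": 10.0, "t2": 2.0, "t3": 0.0}

for r in (r1, r2):
    deterministic = r["t2"]                   # policy that always produces tau_2
    mixed = 0.5 * r["t1"] + 0.5 * r["t3"]     # 50/50 mixture of tau_1 and tau_3
    print(deterministic > mixed)              # True under r1, False under r2
```

Since any learner seeing only ordinal feedback cannot tell `r1` from `r2`, it cannot decide which policy is better, matching the impossibility argument above.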
**Q6. Clarification on k-wise comparison.**
**A6.** To adapt Algorithm 1 to k-wise comparison, we simply need to change Line 7 from
> Query comparison oracle $m$ times on $\tau$ and $\tau_0$; compute average comparison result $\bar{o}$
to
> For $i$ from 1 to $\lceil 2m/k\rceil$:
> - Query comparison oracle on $(\tau,\tau_0,\cdots,\tau,\tau_0)$, receive output $\phi$
> - Compute $o_i\gets \frac{2}{k}\sum_{j=1}^{k/2} I[\phi(2j-1)>\phi(2j)]$
>
> Compute the average $\bar{o}\gets \frac{1}{\lceil 2m/k\rceil} \sum_{i=1}^{\lceil 2m/k\rceil} o_i$
Here, rather than querying a two-way comparison oracle $m$ times, we query a $k$-wise comparison oracle $\lceil 2m/k\rceil$ times. By Proposition 5, the computed $\bar{o}$ has the same statistical properties as if it were computed from $k/2 \cdot \lceil 2m/k\rceil$ independent samples of a two-way comparison oracle.
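A minimal simulation of this conversion, assuming a hypothetical Bradley-Terry style comparison oracle and modeling each $k$-wise query as $k/2$ approximately independent pair outcomes (the modeling shortcut that Proposition 5 justifies; the utility gap and $k$ below are made up):

```python
import math, random

random.seed(0)

def pairwise_oracle(u_a, u_b):
    """Hypothetical Bradley-Terry comparison: 1 if the first trajectory is preferred."""
    p = 1.0 / (1.0 + math.exp(-(u_a - u_b)))
    return 1 if random.random() < p else 0

def kwise_average(u_tau, u_tau0, m, k):
    """Average comparison result from ceil(2m/k) k-wise queries on (tau, tau0, ..., tau, tau0)."""
    n_queries = math.ceil(2 * m / k)
    o_bars = []
    for _ in range(n_queries):
        # each k-wise query is modeled as k/2 (approximately independent) pair outcomes
        wins = sum(pairwise_oracle(u_tau, u_tau0) for _ in range(k // 2))
        o_bars.append(2 * wins / k)
    return sum(o_bars) / n_queries

est = kwise_average(u_tau=0.5, u_tau0=0.0, m=2000, k=4)
print(est)  # should be close to sigmoid(0.5), the true preference probability
```

The estimate concentrates around the true pairwise preference probability, exactly as if $2m$ pairwise samples had been drawn.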
**Q7. On NPG as an adversarial MDP solver.**
**A7.** There are policy optimization style algorithms for adversarial MDPs in the tabular setting (see e.g. Algorithm 3 in Efroni et al. 2020), which could indeed be used as the solver required by the reduction in Section 4.2.
**Q8. "Also, given everything is fully observed, is there an algorithm that gets a tighter rate than OMLE? It sort of feels like you're using a heavier hammer than necessary."**
**A8.** In certain cases like tabular MDPs and kernel linear MDPs, we believe it is possible to obtain sharper rates by replacing the MLE-based confidence sets with ones that are specially designed to exploit the problem structures. For example, we can use Bernstein inequality to construct tighter confidence intervals for each entry $P(s’ \mid s,a)$ of the transition matrix in the tabular setting. Nonetheless, the current version uses OMLE-style algorithms because of its algorithmic simplicity and its generality. Specifically, we can use one single algorithm (OMLE) to provide polynomial sample-efficiency guarantees for many distinctive RLHF problems, e.g., tabular MDPs, factored MDPs, linear kernel MDPs etc. Moreover, it also directly generalizes to RLHF under partial observability, e.g., observable POMDPs and decodable POMDPs.
---
Lattimore et al. Learning with Good Feature Representations in Bandits and in RL with a Generative Model. 2020.
Efroni et al. Optimistic Policy Optimization with Bandit Feedback. 2020.
Lin Yang, Mengdi Wang. Reinforcement Learning in Feature Space: Matrix Bandit, Kernels, and Regret Bound. 2020.
---
Rebuttal Comment 1.1:
Title: Re:
Comment: I thank the authors for their rebuttal. After reading it and the comments of the other reviewers, I would be most comfortable with keeping my score where it is.
Please remember to make the changes you promised to above! I would also add in some of the text from A6/A7 to the main paper, as well as updating Footnote 3, when you have the chance. | Summary: The objective of this paper is to establish a theoretical foundation for reinforcement learning based on human feedback preferences. The authors conduct an analysis on two aspects: (1) utility-based preferences in tabular MDPs, linear MDPs, and MDPs with low Bellman-Eluder dimension, and (2) general preferences, where the calculation of von Neumann winners can be reduced to finding NEs of games with independent transition dynamics.
Strengths: The current limitations in the theoretical understanding of reinforcement learning from human feedback have prompted the need to address crucial questions, particularly in light of the significant success achieved by RLHF in applications such as ChatGPT. This paper effectively tackles these important questions, contributing to a deeper understanding of the subject matter.
Weaknesses: The research topic of this paper is indeed interesting, and the reviewer initially had high expectations. However, there is a significant discrepancy between the paper's title and the actual content presented in the theoretical analyses.
The first part of the analyses primarily focuses on utility-based preferences, which are applicable to simple linear MDPs or MDPs with low Bellman-Eluder dimension. It is challenging to extend the conclusion that "human query feedback does not scale with the sample complexity of the reinforcement learning algorithm" to the more powerful function approximators commonly used in complex learning settings. Additionally, it raises the question of whether human feedback is even necessary for learning in these simple MDPs. The reviewer thinks that high-cost factors like human feedback are typically introduced in sophisticated tasks such as large language models and autonomous driving.
Certain sections of the paper, particularly the discussion on general preferences, are challenging to follow. There is a lack of exploration regarding how the theoretical analyses and computation of the von Neumann winner can be translated into an algorithm for relating preferences to rewards. This omission makes it difficult to assess the practicality and applicability of the proposed method, as it is based on the assumption of factorizable underlying Markov games. In realistic scenarios, independent transition dynamics are not commonly observed. This raises concerns about the completeness of this section in the paper.
If the results of the paper are limited to simple MDPs, the authors may consider changing the title to reflect this specificity.
Based on the reviewer's assessment, the current state of the paper does not warrant immediate publication. The authors are encouraged to reorganize their work and submit a more comprehensive version to top machine learning venues in the future.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: Please find the questions in the weakness section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 3 good
Limitations: This paper didn't discuss limitations of their analyses in a separate section, but it clearly states their assumptions when introducing the theoretical findings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments, and we will address them below.
**Q1.** The first part of analyses primarily focus on the utility-based preferences, which is applicable to simple linear MDPs or MDPs with lower Bellman-Ruler dimensions. It is challenging to extend the conclusion that "human query feedback does not scale with the sample complexity of the reinforcement learning algorithm" to more powerful function approximators commonly used in complex learning settings.
**A1.** Theorem 4 shows that the query complexity of P2R Interface only scales with the complexity of learning the reward function and is independent of the complexity of the RL algorithm, regardless of the RL tasks and algorithms we are considering. As a result, even in a complex learning setting with powerful function approximators, the query complexity of P2R Interface is still independent of the RL algorithms. Nonetheless, P2R Interface may have larger query complexity in more complex tasks because the reward function could be harder to learn.
**Q2.** Additionally, it raises the question of whether human feedback is even necessary for learning in these simple MDPs.
**A2.** This paper studies the setting of RLHF where human feedback is the only information available and rewards either are not observable (utility-based setting) or may not exist (non utility-based setting). Therefore, learning is impossible without using human feedback in the setting of RLHF.
**Q3.** There is a lack of exploration regarding how the theoretical analyses and computation of the von Neumann winner can be translated into an algorithm for relating preferences to rewards.
**A3.** Experiments have shown that human preferences can be intransitive [Tversky, 1969], which implies there is NO reward function that can represent human preference in certain tasks. As a result, we adopt a new solution concept—von Neumann winner, and provide two schemes to learn it: (1) reduce learning von Neumann winner to learning factorizable Markov games and then apply adversarial MDP algorithms; (2) a preference-based version of the OMLE algorithm. Finally, we remark that it is impossible to relate preferences to rewards because no reward function can represent intransitive preference.
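For illustration, a rock-paper-scissors style preference matrix makes both points concrete: the preferences are cyclic, so no reward function is consistent with them, yet the uniform mixture is a von Neumann winner (a sketch; the matrix is hypothetical):

```python
import numpy as np

# P[i, j] = probability that option i is preferred over option j (cyclic/intransitive)
P = np.array([[0.5, 1.0, 0.0],
              [0.0, 0.5, 1.0],
              [1.0, 0.0, 0.5]])

# No reward function can represent this: 0 beats 1, 1 beats 2, but 2 beats 0.
# A von Neumann winner is a mixed policy p with (p @ P)[j] >= 1/2 against every option j.
p = np.full(3, 1 / 3)
print(p @ P)  # each entry ~0.5: the uniform mixture is never beaten
```

No pure option satisfies this guarantee here, which is precisely why the mixed-policy solution concept is needed.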
**Q4.** This omission makes it difficult to assess the practicality and applicability of the proposed method, as it is based on the assumption of factorizable underlying Markov games. In realistic scenarios, independent transition dynamics are not commonly observed.
**A4.** We neither assume factorizable Markov games nor assume independent transition dynamics. Instead, we reduce the problem of finding von Neumann winners to solving factorizable Markov games (Proposition 9). Moreover, the reduction holds for any RLHF problems without any assumption.
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: The authors' response has effectively addressed many concerns raised by the reviewer. Nevertheless, the reviewer still thinks there is a disparity between the paper's title and the theorems presented within. While it is true that the query complexity of the proposed Interface remains independent of the choice of RL algorithms, it is intrinsically linked to the intricacy of the reward function. In cases beyond linear MDPs or MDPs with low Bellman-Eluder dimension, the reward function is expected to exhibit complexity, potentially resulting in poor scalability of the presented conclusions. In contrast, the title imparts the impression that the authors have resolved this pivotal question. The motivating examples in the paper are recommendation systems, image generation, robotics, and large language models. It is unlikely that the MDPs are linear here.
Moreover, in the simple MDP cases, the authors put that "This paper studies the setting of RLHF where human feedback is the only information available and rewards either are not observable (utility-based setting) or may not exist (non utility-based setting)". Is such a setting common in real-world applications? Why do we even care about this setting? Why not directly design a reward function for these MDPs, which is not impossible given that the underlying MDP is not complex.
In light of these considerations, the reviewer's rating has been adjusted to "4". The reviewer maintains the viewpoint that this paper poses an intriguing question, acknowledges the inherent challenges in conducting theoretical analyses, and commends the efforts invested in addressing this question. However, as it stands, the paper appears to be unprepared for publication. The authors could also enhance the writing, potentially by focusing solely on presenting pivotal and insightful findings in the main paper.
---
Reply to Comment 1.1.1:
Title: Further clarification
Comment: We thank the reviewer for their time and effort, and we are glad that our previous rebuttal has addressed many of their concerns. We would like to make two further clarifications here.
**1. Complexity of the Underlying MDP:**
> The motivating examples in the paper are on recommendation systems, image generation, robotics, and large language models. It is unlikely that MDPs are linear here.
We wish to emphasize that our main theoretical result does not in any way assume linearity of the MDP. Instead, the guarantees hold as long as there exist efficient reward-based RL algorithms. Low Bellman-Eluder dimension MDPs are simply an example where provably efficient RL algorithms are known. Therefore, the complexity of the MDPs for recommendation systems, image generation, robotics, and large language models does not detract from the general applicability of our main finding, which is **RLHF is not statistically harder than standard RL**.
**2. Relevance of the RLHF Setting:**
> The authors put that "This paper studies the setting of RLHF where human feedback is the only information available and rewards either are not observable (utility-based setting) or may not exist (non utility-based setting)". Is such a setting common in real-world applications? Why do we even care about this setting?
Indeed, the setting we explore is prevalent in a myriad of real-world applications. For instance, in scenarios like tuning language models or improving human-robot interactions, manually designing a reward function a priori is difficult if not impossible, which makes *learning* a reward function from human feedback extremely attractive. Exploiting human feedback sidesteps the challenges of manually specifying a reward function, which can be error-prone and might not fully capture nuanced human preferences. This is precisely the key motivation of RLHF. The emphasis on RLHF is driven by its widespread utility in these and other applications, making our study both relevant and timely.
We appreciate the reviewer's insightful feedback and hope these clarifications further elucidate the novelty and applicability of our work. We remain committed to refining our manuscript to better convey our findings and address any ambiguities. | Summary: The authors study the problem of learning in preference-based RL and investigate the question of whether preference based RL is any harder that reward based RL. They show that for preferences that are based on an underlying reward function, preference based RL is no harder than reward based RL for most of the theoretical RL settings including tabular MDPs, linear MDPs, MDPs with finite eluder dimension. They propose an efficient algorithm that can solve this problem by reducing it to the reward based RL and solving the reward version instead. Further, they demonstrate that this reduction does not incur any additional sample complexity, requiring humans to provide preference on only a small portion of trajectories collected by the algorithm.
For general preference function, they propose a framework of factored Multi-Agent MDP to find the “von Neumann winner” solution concept. They further provide algorithms based on Adversarial MDP and Optimistic MLE to solve the problem in the setting where preference are entirely based on last state and on complete trajectory respectively.
Strengths: - The paper is well written and except minor notation issues it is a good read.
- The authors provide algorithms to solve the preference based RL problem with a theoretical guarantee on sample and computation complexity in different RL settings including finite state, linear and bounded eluder dimension settings.
- The authors also study this problem in general preference feedback setting and propose the solution concept of “von Neumann winner”. They further propose algorithms when preference depend on the final state or entirety of the two compared trajectories.
Weaknesses: - The paper assumes that humans can provide preferential feedback at trajectory level which is not always easy.
- The concept of Eluder dimension and following results are not very intuitive.
- The paper lacks any experimental results but the theoretical contribution is strong enough.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. In line 116-117 : Why is underlying reward function not state-action Markovian? The corresponding value notation is not correct.
2. Can you compare two states as in line 139 or is it two (state,action) pair?
3. Why the input trajectories needs to be feasible with respect to underlying environment? Where is this assumption used?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: - The paper does not address the setting when users can only provide preference at state and not trajectory level.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We thank the reviewer for their positive evaluation and detailed feedback. We address the reviewer’s questions as follows.
**Q1.** “The paper assumes that humans can provide preferential feedback at trajectory level”
**A1.** We would like to clarify that in the utility-based setting, our results could apply to the setting where human labellers provide feedback on a state-action pair, where the preference is assumed to be based on the immediate reward (see Lines 133-136 and Remark 1). We primarily present the results for the trajectory-based feedback setting since this is the setting considered in most prior work (see, e.g., Pacchiano et al. 2021; Novoseller et al. 2020).
For the non-utility-based setting, we do have results (Section 4.2) on the case where the preference is based on the final state of the trajectory. Without this assumption, it is highly unclear how one can reason about preferences over trajectories based on human feedback at the state-action pair level.
**Q2.** “In line 116-117 : Why is the underlying reward function not state-action Markovian?”
**A2.** Thank you for pointing out the notational inconsistency. Indeed, in Lines 116-117, the reward function $r^\star$ should be state-action based (that is, $r^\star : [H]\times \mathcal{S}\times \mathcal{A} \to [0,1]$).
**Q3.** “Can you compare two states as in line 139 or is it two (state,action) pair?”
**A3.** Thank you for pointing out this typo. Line 139 should be modified as “... evaluator prefers $\tau$ over $\tau’$ is”.
**Q4.** “Why do the input trajectories need to be feasible with respect to the underlying environment?”
**A4.** The feasibility requirement is more of a restriction on algorithms (that our algorithms meet) than an assumption. The motivation of this condition is to rule out trivial but impractical strategies. For instance, one can learn the full reward function using *no* samples from the MDP by iteratively querying the trajectory with the highest uncertainty in the whole trajectory space. However, such synthesized trajectories could be random sequences of pixels or streams of incoherent speech, which human labelers cannot reliably evaluate. In other words, the feasibility requirement can be thought of as a weakening of Definition 1: instead of assuming the comparison oracle to be valid on the whole trajectory space, we can alternatively assume that it is only valid on trajectories that are feasible.
---
Aldo Pacchiano, Aadirupa Saha, and Jonathan Lee. Dueling RL: Reinforcement Learning with Trajectory Preferences. 2021.
Ellen Novoseller, Yibing Wei, Yanan Sui, Yisong Yue, and Joel Burdick. Dueling Posterior Sampling for Preference-Based Reinforcement Learning. 2020. | Summary: The authors attempt to show the conditions under which RLHF is theoretically identical to standard RL, where a reward function is specified as part of the environment. The P2R Interface algorithm is given as a way to learn from preference feedback such that all requirements are met for RLHF to be identical to standard RL. The reduction of RLHF to standard RL is discussed in the context of utility-based preferences and general preferences that do not meet the requirements of a utility function.
Strengths: - The paper includes an extensive related works section.
Weaknesses: - The paper is difficult for me to follow and understand. It is very full of jargon for which no explanation is provided.
- It is not well motivated why the "reduction" the authors propose to make is necessary and the benefits it carries.
- A discussion section to wrap up what has been shown in the paper would be helpful.
- Figure 1 is missing.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: - In the utility-based preferences scenario, the authors state that the trajectories given as input to the comparison oracle must be generated by the policy. How does this work in the case of Ouyang et al. [2022]'s approach, where the reward model is learned in advance and the trajectories are therefore not generated by the policy?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 1 poor
Limitations: - The paper is non-trivial for someone not an expert on the specific topic of the paper to follow. It would be great for the paper to be more easily understandable by those with a background in preference-based RL, so that they may incorporate the learnings into their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1.** The paper is difficult for me to follow and understand. It is very full of jargon for which no explanation is provided.
**A1.** We kindly ask the reviewer to specify the sections of the paper they found ambiguous. We are eager to provide clarifications where needed.
**Q2.** It is not well motivated why the "reduction" the authors propose to make is necessary and the benefits it carries.
**A2.** The term “reduction” in computer science refers to a scheme for transforming one problem into another. In our paper, the reduction converts the RLHF problem into a standard reward-aware RL problem. The benefit of designing such a reduction (as opposed to a single RLHF algorithm) is that any standard RL algorithm can be combined with the reduction to derive an RLHF algorithm.
**Q3.** A discussion section to wrap up what has been shown in the paper would be helpful.
**A3.** Thank you for the suggestion. We will add the following conclusion section in the revision:
This paper studies RLHF via efficient reductions. For utility-based preferences, we introduce a Preference-to-Reward Interface which reduces preference-based RL to standard reward-based RL. Our results are amenable to function approximation and incur no additional sample complexity. For general preferences without underlying rewards, we reduce finding the von Neumann winner to finding a restricted Nash equilibrium in a class of Markov games. This can be solved more concretely by adversarial MDP algorithms if the preference depends solely on the final state, and by optimistic MLE for preferences that depend on the whole trajectory. Our results demonstrate that RLHF, from both utility-based and general preferences, can be readily solved under standard assumptions and by existing algorithmic techniques in the RL theory literature. This suggests that RLHF is not much harder than standard RL in the complexity sense, and need not be more complicated in the algorithmic sense. Consequently, our findings partially answer our main query: RLHF may not be more difficult than standard RL.
**Q4.** Figure 1 is missing.
**A4.** Please refer to Appendix A in the supplementary materials for Figure 1.
**Q5.** In the utility-based preferences scenario the authors state that the trajectories given as input to the comparison oracle must be generated by the policy. How does this work in the case of Ouyang et al. [2022]'s approach where the reward model is learned in advance and therefore are not generated by the policy?
**A5.** Our algorithms (e.g., P2R Interface) only need to query the comparison oracle with trajectories generated by the policies. This means that all results in this paper still hold without any change even if we allow the comparison oracle to compare arbitrary trajectories, as is the case in Ouyang et al. [2022].
---
Rebuttal Comment 1.1:
Comment: A1. We kindly ask the reviewer to specify the sections of the paper they found ambiguous. We are eager to provide clarifications where needed.
- It would be great to have an example or two comparing the two approaches here, with some citations: "These works typically develop specialized algorithms and analysis in a white-box fashion, instead of building on existing techniques in standard RL." What are examples of how they are specialized, and what are examples of the standard RL techniques that should be built upon?
- It would be helpful to have more explanation of what is meant by "general (arbitrary) preferences" earlier in the paper.
- What is a "confidence set of rewards Br"? Is this an ensemble of reward models and you are checking for ensemble agreement? Does this mean there will likely be more queries to the oracle at early stages of policy/reward training?
- Why compare against "a fixed trajectory τ0"? How is the fixed trajectory selected? Does this fixed trajectory need to have some quality guarantees?
A4. Please refer to Appendix A in the supplementary materials for Figure 1.
- If a main figure for the paper is in the Appendix then that should be called out when the figure is referenced.
---
Reply to Comment 1.1.1:
Title: Response to further questions
Comment: We thank the reviewer for the follow-up response. The questions will be addressed below.
> It would be great to have an example or two comparing the two approaches here with some citations.
Examples comparing our approach and that of existing studies can be found in Section 1.1, L85-99. The main difference between our approach and previous works mentioned in L85-99 is that our work is reduction-based and can directly work with existing reward-based RL algorithms.
> It would be helpful to have more explanation of what is meant by "general (arbitrary) preferences" earlier in the paper.
The term “general preferences” is used in contrast to “utility-based preferences”, which are briefly explained in Line 45. We will include an additional note in Line 55 in the revision for better clarity.
> What is a "confidence set of rewards Br"?
The confidence set of rewards $B_r$ is a set of reward functions that is maintained by the algorithm (see Line 10 of Algorithm 1). For instance, if the reward function class is linear, the confidence set $B_r$ would correspond to an ellipsoid in the parameter space.
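To make the linear case concrete, here is a toy sketch (illustrative function and parameter names, not the paper's actual construction) of how the width of such a confidence ellipsoid at a new feature vector could be computed; when the width is small, all reward functions in the set roughly agree there, so querying the oracle adds little information:

```python
import numpy as np

def uncertainty_width(phi, Phi_past, lam=1.0, beta=1.0):
    # Elliptical confidence width at feature vector phi:
    #   beta * sqrt( phi^T (lam * I + Phi_past^T Phi_past)^{-1} phi )
    # Phi_past stacks the feature vectors of previously observed data.
    d = len(phi)
    phi = np.asarray(phi, dtype=float)
    Sigma = lam * np.eye(d) + Phi_past.T @ Phi_past
    return float(beta * np.sqrt(phi @ np.linalg.solve(Sigma, phi)))

# With no past data the width is maximal; it shrinks as similar
# feature vectors accumulate.
w0 = uncertainty_width([1.0], np.zeros((0, 1)))  # 1.0
w3 = uncertainty_width([1.0], np.ones((3, 1)))   # 0.5
```

A query rule would then compare this width against a threshold and only call the human comparison oracle when the ellipsoid still disagrees at the new point.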
> Does this mean there will likely be more queries to the oracle at early stages of policy/reward training?
It would be hard to give a definite answer for MDPs in general. The oracle would be queried when the online RL algorithm generates trajectories (or state-action pairs) that are “novel” for the P2R interface. When such trajectories are visited would depend heavily on the exploration strategy used by the online RL algorithm.
> Why compare against "a fixed trajectory τ0"?
The fixed trajectory $\tau_0$ used in Algorithm 1 is collected in Line 2 at the start of the algorithm by executing a uniformly random policy. By Definition 1, the comparison oracle returns a result based on the *reward differences* of two trajectories. Therefore, by comparing against a fixed trajectory $\tau_0$, we can learn $r(\cdot)-r(\tau_0)$ — the ground-truth reward function up to a fixed offset. The trajectory $\tau_0$ can in principle be an arbitrary trajectory due to Assumption 2. The reason we use a trajectory generated by a random policy is to meet the feasibility criterion discussed in Line 130.
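As an illustration of this last point, here is a toy sketch (not the paper's algorithm; it assumes a logistic, Bradley-Terry-style comparison oracle, which Definition 1 may not exactly match) of how repeated comparisons against a fixed baseline recover the reward difference $r(\tau) - r(\tau_0)$:

```python
import math
import random

def comparison_oracle(r_tau, r_tau0, rng):
    # Prefers tau over tau0 with probability sigmoid(r(tau) - r(tau0)).
    # The logistic link is an assumption made for this illustration.
    p = 1.0 / (1.0 + math.exp(-(r_tau - r_tau0)))
    return 1 if rng.random() < p else 0

def estimate_reward_offset(r_tau, r_tau0, n_queries=50000, seed=0):
    # The empirical preference frequency, inverted through the logit,
    # recovers r(tau) - r(tau0): the reward up to a fixed offset.
    rng = random.Random(seed)
    wins = sum(comparison_oracle(r_tau, r_tau0, rng) for _ in range(n_queries))
    p_hat = wins / n_queries
    return math.log(p_hat / (1.0 - p_hat))

est = estimate_reward_offset(r_tau=1.2, r_tau0=0.5)  # close to 0.7
```

Since every query uses the same $\tau_0$, the learned function is shifted by the constant $r(\tau_0)$, which does not affect which policies are optimal.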
> If a main figure for the paper is in the Appendix then that should be called out when the figure is referenced.
We will clarify that Figure 1 is in Appendix A in references to it. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Meta-in-context learning in large language models | Accept (poster) | Summary: The paper studies an ability of LLMs, called meta-in-context learning, which showcases that LLMs can recursively improve their in-context learning with demonstrations. The authors illustrate this capability with a regression task and a two-armed bandit task. The analysis demonstrates that LLMs are not only able to learn the task (underlying function) from the examples in this task but also can leverage the examples from other tasks (underlying functions).
Strengths: 1. The paper highlights the meta-in-context learning capability of LLMs. The capability allows LLMs to be recursively improved via in-context learning with more tasks in context.
2. The paper is clearly written and easy to follow.
Weaknesses: 1. The analysis in the paper demonstrates that LLMs can leverage the examples from other tasks, however, the definition of a task appears to be rather limited in this paper. In the linear regression experiment, for example, a task represents the parameters within a linear function.
2. The authors introduce the concept of meta-in-context learning as a novel ability of LLMs, but it remains unclear how this differs from the traditional in-context learning ability. In this paper, each regression function is regarded as a task with a few examples following the function, and LLMs are found to be able to learn from other tasks. However, this ability seems to have been demonstrated in various other applications. For instance, LLMs can learn to generate dialog responses by observing example responses from other dialogs. If we apply the terminology used in this paper, each dialog can also be considered a task, with each individual utterance serving as an example.
While I think the analysis in the regression and two-armed bandit tasks is commendable, it would be valuable to see such analysis on more realistic tasks, such as dialog generation (a dialog is a task), and passage/image question answering (a passage or image is a task).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. On line 118, it is mentioned that GPT-3 is not told that the underlying functions are linear, unlike BLR. However, this raises the question of whether GPT-3 possesses the capability to handle non-linear functions. Additionally, if GPT-3 sees a few tasks with linear functions, is it able to generalize to a non-linear function in a new task? The meta-in-context learning ability is more valuable if the tasks can be more different.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 6CRk,
We thank the reviewer for their helpful comments and we have made a response to each of their comments along with suggested changes to the paper:
> 1. The paper highlights the meta-in-context learning capability of LLMs. The capability allows LLMs to be recursively improved via in-context learning with more tasks in context. 2, The paper is clearly written and easy to follow.
[...] I think the analysis in the regression and two-armed bandit tasks is commendable
We appreciate the reviewer’s comments on the clarity and presentation of the paper as well as exposing the two-armed bandit task as commendable.
> 2. The authors introduce the concept of meta-in-context learning as a novel ability of LLMs, but it remains unclear how this differs from the traditional in-context learning ability. In this paper, each regression function is regarded as a task with a few examples following the function, and LLMs are found to be able to learn from other tasks. However, this ability seems to have been demonstrated in various other applications. For instance, LLMs can learn to generate dialog responses by observing example responses from other dialogs. If we apply the terminology used in this paper, each dialog can also be considered a task, with each individual utterance serving as an example.
Thank you for this comment. We view the relationship between in-context learning and meta-in-context learning as similar to the relationship between Bayesian inference and hierarchical Bayesian inference. They are both based on the same algorithmic principles applied at different conceptual levels. In-context learning and Bayesian inference are used for within-task learning, while meta-in-context learning and hierarchical Bayesian inference are used to pool information across tasks. Formulating explicit tasks and episodes in the way that we do allows us to quantify the degree of meta-in-context learning, whereas a setting in which dialogue responses simply follow from other dialogues is hard to quantify.
> While I think the analysis in the regression and two-armed bandit tasks is commendable, it would be valuable to see such analysis on more realistic tasks, such as dialog generation (a dialog is a task), and passage/image question answering (a passage or image is a task).
We appreciate this suggestion which was echoed by all other reviewers as well. We have therefore conducted additional simulations of meta-in-context learning on the Massive Multitask Language Understanding (MMLU) benchmark. We will add the following section to our revised paper:
"**Meta-in-context learning on natural language processing benchmarks**:
Finally, we examined whether meta-in-context learning also improves upon in-context learning on standard natural language processing tasks. To test this, we conducted an experiment on the Massive Multitask Language Understanding (MMLU) benchmark \cite{hendrycks2020measuring}.
**Methods**:
We focus on the tasks from the STEM supercategory as other supercategories -- together with the addition of meta-in-context learning -- cause prompt lengths to exceed the limits of GPT-3. For the in-context learning simulations, we provided the model with $k \in \{0, 1, 2\}$ examples from the same category before prompting it on the test question. For the meta-in-context learning simulations, we additionally prepended three examples of two tasks from \emph{different} categories.
**Results**:
Figure 9 summarizes our results. We found that meta-in-context learning was in general beneficial in terms of performance. The biggest benefit was observed in the zero-shot case, in which meta-in-context learning reached an accuracy of $55.1$ percent, outperforming in-context learning by $22.4$ percent. This illustrates that LLMs do not necessarily have to be prompted with examples from the same category but can also transfer some knowledge from different categories."
The figure with the corresponding results can be found in the attached PDF under Figure 9.
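To make the two prompting conditions concrete, here is a minimal sketch of the prompt assembly (`format_example` and `build_prompt` are illustrative names; the exact templates and instruction text used in the experiment may differ):

```python
def format_example(question, options, answer=None):
    # MMLU-style multiple-choice block; the answer is left blank
    # for the test item so the model completes it.
    letters = "ABCD"
    lines = [question]
    lines += [f"{letters[i]}. {opt}" for i, opt in enumerate(options)]
    lines.append(f"Answer: {answer}" if answer is not None else "Answer:")
    return "\n".join(lines)

def build_prompt(test_item, same_category_shots, other_category_tasks=()):
    # Plain in-context learning: k same-category shots + the test question.
    # Meta-in-context learning: additionally prepend a few examples each
    # from tasks of *different* categories.
    blocks = []
    for task in other_category_tasks:
        blocks += [format_example(q, o, a) for q, o, a in task]
    blocks += [format_example(q, o, a) for q, o, a in same_category_shots]
    q, o = test_item
    blocks.append(format_example(q, o))
    return "\n\n".join(blocks)
```

Passing an empty `other_category_tasks` tuple reproduces the in-context baseline, so the two conditions differ only in the prepended cross-category examples.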
> On line 118, it is mentioned that the GPT-3 is not told that underlying functions are linear, distinct from BLR. However, this raises the question of whether GPT-3 possesses the capability to handle non-linear functions. Additionally, if GPT-3 sees a few tasks with linear functions, is GPT-3 able to generalize to a non-linear function in a new task? The meta-in-context learning ability is more valuable if the tasks can be more different.
We appreciate the feedback and agree that additional evaluation on non-linear functions does make the analysis more comprehensive. We have therefore added the following section to our revised paper:
"**Meta-in-context learning with non-linear functions**:
We also aimed to determine whether GPT-3 could use meta-in-context learning to adapt to non-linear functions. In the Supplementary material, we present an analysis using quadratic functions conducted within the same experimental framework. In summary, we find that GPT-3 indeed exhibits the ability to meta-in-context learn non-linear functions."
For a comprehensive visualization of these refined insights, we refer readers to Figure 7 in the attached PDF.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response and the efforts to address my concerns. I think the new experiments better explain the effectiveness of meta-ICL. I still have a question about the results on MMLU. Meta-ICL only performs better than ICL in the zero-shot setting, where ICL sees no demonstration examples while meta-ICL sees a few examples, albeit from another category. Given the small difference between these categories (same task format but different subjects, e.g. physics vs. math), it is not surprising that having examples is better than the zero-shot setting. I have updated my score based on the other experiments, but I am looking forward to more elaboration on the MMLU results.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate the feedback provided by the reviewer, which we believe resonates with their earlier observations about the definition of a task and its distinction from conventional in-context learning. While we acknowledge the similarity between our MMLU results and previous empirical findings for real-world data, as demonstrated by the example dialogue the reviewer provided, we maintain our position that our framework offers a unique quantitative assessment of the extent of meta-in-context learning. This contrasts with previous research, which primarily reported performance improvements without delving into the underlying mechanisms. Below, we hope to convince the reviewer that the performance improvement does not come solely from having a very similar task as an example, but also depends on how related the tasks are, even in NLP experiments.
**Elaboration on MMLU results: Task similarity has a significant effect on accuracy on the MMLU benchmark:**
To elaborate on the benefits of our analysis, we wanted to quantify again the role of task similarity in meta-in-context learning on the MMLU benchmark. As explained before, due to context-size and price constraints, the text-davinci-002 engine was only tested on the STEM subcategory of the MMLU benchmark, which does not allow enough differentiation between tasks, as the reviewer mentioned. Therefore, we ran two open-source models with larger context sizes on the entire MMLU benchmark, namely MPT-30B and Falcon-40B. The task name was given to the text-embedding-ada-002 model, and task similarity was computed as the average negative L2 norm between a task name's embedding and the previous task names' embeddings. For both models, we see a significant effect ($\beta = 0.1918 \pm 0.076$ and $\beta = 0.1718 \pm 0.077$, both $p < 0.05$). This suggests that meta-in-context learning leverages not only the context itself but also the relationship between the current task and its context, even for NLP tasks, as we would expect in a meta-learning framework. This analysis will be included in the Supplementary material. If there are any remaining inquiries, we are available for further discussion. | Summary: The authors demonstrate that the in-context learning abilities of large language models can be recursively improved via in-context learning itself.
The paper is missing a method section. I don't know the details and cannot tell the difference from previous work.
I didn't get the novel part of the method. It seems to be an empirical study.
The authors had better do some experiments on benchmark datasets, such as the datasets from the GPT-3 paper.
Strengths: 1. The problem explored is critical.
2. The experimental analyses are interesting.
Weaknesses: 1. Missing model section.
2. The method is more of an empirical study.
3. The experiments are not solid. The authors focus on exploring the GPT-3 model but miss comparisons to any benchmark dataset from the GPT-3 paper. The shot number, context length, and language understanding in benchmark datasets are all critical issues to study. Moreover, GPT-3 was released quite a long time ago. Exploring recent LLMs, such as ChatGPT or GPT-4, would be better.
4. The finding is similar to GPT-3 on domain adaptation, such as the work from "Prompting GPT-3 To Be Reliable".
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Can you formulate meta-in-context learning?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: I didn't get the novel contribution of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer NP3F,
We thank the reviewer for their helpful comments and we have made a response for each of their comments along with suggested changes to the paper:
> Strengths:
The problem explored is critical.
The experimental analyses are interesting.
We thank the reviewer for exposing the research problem as critical and the analyses as interesting.
> Weaknesses:
Missing model section.
The method is more of an empirical study.
We thank the reviewer for the feedback. However, we do not view this as a weakness. NeurIPS publishes hundreds of empirical studies every year that do not propose new models.
> The experiments are not solid. The authors focus on the GPT-3 model exploration but miss comparison to any benchmark dataset from GPT-3 paper. The shot number, context length, and language understanding in benchmark datasets are all critical issues to study.”
We appreciate the criticism. In our revised paper, we have conducted additional simulations of meta-in-context learning on the Massive Multitask Language Understanding (MMLU) benchmark. We will add the following section to our revised paper:
"**Meta-in-context learning on natural language processing benchmarks**:
Finally, we examined whether meta-in-context learning also improves upon in-context learning on standard natural language processing tasks. To test this, we conducted an experiment on the Massive Multitask Language Understanding (MMLU) benchmark \cite{hendrycks2020measuring}.
**Methods**:
We focus on the tasks from the STEM supercategory as other supercategories -- together with the addition of meta-in-context learning -- cause prompt lengths to exceed the limits of GPT-3. For the in-context learning simulations, we provided the model with $k \in \{0, 1, 2\}$ examples from the same category before prompting it on the test question. For the meta-in-context learning simulations, we additionally prepended three examples of two tasks from \emph{different} categories.
**Results**:
Figure 9 summarizes our results. We found that meta-in-context learning was in general beneficial in terms of performance. The biggest benefit was observed in the zero-shot case, in which meta-in-context learning reached an accuracy of $55.1$ percent, outperforming in-context learning by $22.4$ percent. This illustrates that LLMs do not necessarily have to be prompted with examples from the same category but can also transfer some knowledge from different categories."
The figure with the corresponding results can be found in the attached PDF under Figure 9.
> And GPT-3 is released quite a long time ago. Exploring recent LLMs, such as ChatGPT or GPT-4, would be better.
We appreciate the feedback and agree that more analysis on more recent LLMs (especially open-source models which are transparent on training and architectures) does improve the empirical analysis. Therefore, we conducted additional simulations of meta-in-context learning on the latest open-source models (MPT-30B, Falcon-40B, Llama-2-7B/13B/70B). We will add the following section to our revised paper:
"**Meta-in-context learning in open-source models**:
To comprehensively explore meta-in-context learning, we extended our evaluation to encompass five distinct open-source models. This expansion allowed us to examine whether this phenomenon is also applicable to less opaque model architectures. Notably, while the majority of models (Falcon-40B \cite{falcon40b} and Llama-2 \cite{touvron2023llama2} models) exhibited no indications of meta-in-context learning, intriguingly, MosaicML's MPT-30B \cite{MosaicML2023Introducing} demonstrated this ability. The results are included in the Supplementary material."
The figure with the corresponding results can be found in the attached PDF under Figure 6B.
We also want to point out to the reviewer that the GPT-4 model also was already included for the real-world regression in the Supplementary materials (see Figure 5).
> The finding is similar to GPT-3 on domain adaptation, such as the work from "Prompting GPT-3 To Be Reliable".
We thank the reviewer for mentioning this paper. We have included a reference to it in our discussion. That being said, we do not think this paper is particularly relevant to our work (besides the fact that both are concerned with in-context learning). While Si et al. propose ways to make in-context learning more reliable using prompt engineering, we show that GPT-3 is capable of adapting its in-context learning abilities to previously encountered related tasks without the need for prompt engineering. In addition, "Prompting GPT-3 to be reliable" focuses on different metrics (4 aspects of reliability), whereas our paper focuses on a different ability (meta-learning).
> Questions:
Can you formulate meta-in-context learning?
We believe that Figure 1 provides a nice formulation of what we view as meta-in-context learning:
“High-level overview of our approach on an example of multiple three-shot regression tasks. We present an LLM with $N$ learning tasks in a row. Improvement within a task indicates that the model is capable of in-context learning. If in-context learning improves across multiple learning tasks, the model is also capable of meta-in-context learning.” | Summary: The authors undertake a study of "meta in-context learning" as a capability of Large Language Models, specifically focused on GPT-3 (with some initial experiments on GPT-4). The authors define meta in-context learning following a task-trial structure, in which the agent observes multiple tasks each consisting of multiple trials. In each trial, a given input-output pair is observed; on the final trial, the model needs to predict the output for an unpaired input. The (unobserved) function generating outputs from inputs differs between tasks. The authors show that GPT-3 is able to learn not only at the trial level (traditional in-context learning) but also across tasks; that is, it can get better both at modelling a specific function and at the general task of modelling functions (in a particular context). The authors further show that this holds across artificial simple regression, reinforcement learning and multiple regression.on real-world data.
Strengths: The paper is well-written, and the results are thoroughly investigated and clearly conveyed. The authors seek to identify the specific causes of the behaviours they document and the extent to which they can be attributed to simpler processes, e.g. learning the output distribution for predicting on the first task of a trial. The paper includes a discussion of context window size-related limitations to the applicability of the results.
On originality, I don't know of any other works that address the question of meta-learning without weight updates. As the authors point out, the results could inspire other researchers and applied LLM users to pursue multi-task setups rather than finetuning.
Weaknesses: The paper is well-reasoned within the specific niche of tasks demonstrated, though the overall implications of the work are arguable. Because the examples are constrained to simple numerical problems, it's difficult to tell whether the capability can be usefully leveraged on other types of language tasks. In the discussion, where it's argued that meta in-context could replace finetuning, it would help to give a few examples of real-world tasks that could be reframed as meta in-context.
Within the artificial tasks, it would also be nice to see non-numerical demonstrations of meta in-context learning, both to show that it is possible on other types of tasks and because LLMs tend to struggle with numerical transformations. For instance, the within-task problem could involve translating sentences in a simple made-up language into English (with words that are repeated across trials), while each task involves a different made-up language. This could provide a more convincing argument that meta in-context learning is a general phenomenon beyond math tasks.
More detail on the exploration probit regression model and investigated regression coefficients in the Supplementary would be useful.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Is it possible to create a clear definition of what constitutes an in-context trial and what constitutes an across-context trial, i.e. where does in-context learning become meta in-context learning? Does an initial task trial always start with some degree of irreducible uncertainty?
See the above questions about real-world examples or non-numerical examples of meta in-context learning.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: No major negative societal implications are expected. The authors address a number of limitations, including low sample size and simplicity of the tasks, in the Discussion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 3waD,
We thank the reviewer for finding the results thoroughly investigated and clearly conveyed. We also appreciate the comment on originality and belief in its potential impact for applying LLMs to multi-task setups rather than fine-tuning. We also thank the reviewer for their helpful comments and we have made a response for each of their comments along with suggested changes to the paper.
> The paper is well-reasoned within the specific niche of tasks demonstrated, though the overall implications of the work are arguable. Because the examples are constrained to simple numerical problems, it's difficult to tell whether the capability can be usefully leveraged on other types of language tasks. [...]
Within the artificial tasks, it would also be nice to see non-numerical demonstrations of meta in-context learning, both to show that it is possible on other types of tasks and because LLMs tend to struggle with numerical transformations.
We appreciate this suggestion which was echoed by all other reviewers as well. We have therefore conducted additional simulations of meta-in-context learning on the Massive Multitask Language Understanding (MMLU) benchmark. We will add the following section to our revised paper:
"**Meta-in-context learning on natural language processing benchmarks**:
Finally, we examined whether meta-in-context learning also improves upon in-context learning on standard natural language processing tasks. To test this, we conducted an experiment on the Massive Multitask Language Understanding (MMLU) benchmark \cite{hendrycks2020measuring}.
**Methods**:
We focus on the tasks from the STEM supercategory as other supercategories -- together with the addition of meta-in-context learning -- cause prompt lengths to exceed the limits of GPT-3. For the in-context learning simulations, we provided the model with $k \in \{0, 1, 2\}$ examples from the same category before prompting it on the test question. For the meta-in-context learning simulations, we additionally prepended three examples of two tasks from different categories.
**Results**:
Figure 9 summarizes our results. We found that meta-in-context learning was in general beneficial in terms of performance. The biggest benefit was observed in the zero-shot case, in which meta-in-context learning reached an accuracy of $55.1$ percent, outperforming in-context learning by $22.4$ percent. This illustrates that LLMs do not necessarily have to be prompted by examples from the same category but that they can also transfer some knowledge from different categories."
The figure with the corresponding results can be found in the attached PDF under Figure 9.
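To illustrate the setup, here is a hedged sketch of how such a meta-in-context MMLU prompt could be assembled. The exact prompt wording used by the authors is not shown in this rebuttal; the `format_example` and `build_prompt` helpers and the question texts below are hypothetical:

```python
def format_example(question, choices, answer=None):
    """Hypothetical formatter for one multiple-choice MMLU item."""
    lines = [f"Question: {question}"]
    lines += [f"{label}. {text}" for label, text in zip("ABCD", choices)]
    # The final test item is left unanswered for the model to complete.
    lines.append(f"Answer: {answer}" if answer is not None else "Answer:")
    return "\n".join(lines)

def build_prompt(meta_tasks, target_examples, test_item):
    """meta_tasks: (category, examples) pairs from *different* categories,
    prepended before the k in-category shots and the unanswered test question."""
    parts = []
    for _category, examples in meta_tasks:  # meta-in-context block
        parts += [format_example(*ex) for ex in examples]
    parts += [format_example(*ex) for ex in target_examples]  # k in-category shots
    parts.append(format_example(test_item[0], test_item[1]))  # unanswered query
    return "\n\n".join(parts)
```

In this reading, plain in-context learning corresponds to `meta_tasks=[]`, while meta-in-context learning additionally fills `meta_tasks` with examples drawn from other categories.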
> More detail on the exploration probit regression model and investigated regression coefficients in the Supplementary would be useful.
We agree with the reviewer that more details on this model would be helpful to the reader. We have therefore added a section to the Supplementary Materials called “Details on probit regression analysis” which describes the corresponding analysis in more detail:
"**Details on probit regression analysis**:
We have relied on a probit regression analysis proposed by Gershman \cite{gershman2018deconstructing} to investigate which exploration strategies GPT-3 applies when interacting with two-armed bandit problems. This analysis assumes that the agent makes decisions based on three features: value difference $V_t$, relative uncertainty $RU_t$, and value difference divided by total uncertainty $V_t/TU_t$. Formally, these quantities are defined as follows:
$$V_t = \mu_{1, t} - \mu_{2, t}, \qquad RU_t = \sigma_{1, t} - \sigma_{2, t}, \qquad TU_t = \sqrt{\sigma_{1, t}^2 + \sigma_{2, t}^2},$$
where $\mu_{a, t}$ and $\sigma_{a, t}$ represent the agent’s beliefs about the mean reward and its corresponding uncertainty estimate for a given arm $a$ at trial $t$. We compute these values via a sequential application of Bayesian inference, assuming normally-distributed priors and likelihoods (updates are only performed for the selected arm):
$$\mu_{a, t+1} = \mu_{a, t} + \frac{\sigma_{a, t}^2}{\sigma_{a, t}^2 + \tau^2} \left( r_t - \mu_{a, t} \right), \qquad \sigma_{a, t+1}^2 = \sigma_{a, t}^2 - \frac{\sigma_{a, t}^4}{\sigma_{a, t}^2 + \tau^2},$$
where $\tau$ corresponds to the additive observation noise.
The resulting features are then entered into a probit regression model whose parameters are fit to agent choices via maximum likelihood estimation using the statsmodels library \cite{seabold2010statsmodels}. We additionally include an interaction effect with task number $k$ for each feature to investigate how exploration behavior changes across tasks. The final model thus contains six features:
$$P(a_t = 1) = \mathbf{\Phi}\left( \mathbf{w}_1 V_t + \mathbf{w}_2 RU_t + \mathbf{w}_3 \frac{V_t}{TU_t} + \mathbf{w}_4 \, k \, V_t + \mathbf{w}_5 \, k \, RU_t + \mathbf{w}_6 \, k \, \frac{V_t}{TU_t} \right)$$
This probit model subsumes several well-known exploration strategies for specific settings of its parameters:
1. Boltzmann-like exploration for $\mathbf{w} = [\mathbf{w}_{1}, 0, 0, 0, 0, 0]$. Note: Boltzmann exploration would use the logit function instead of the probit. However, the two can be used to closely approximate each other: $\sigma(a) \simeq \mathbf{\Phi}(\sqrt{\frac{\pi}{8}}a)$.
2. a noisy version of the UCB algorithm for $\mathbf{w} = [\mathbf{w}_1, \mathbf{w}_2, 0, 0, 0, 0]$.
3. Thompson sampling for $\mathbf{w} = [0, 0, 1, 0, 0, 0]$. For an exact derivation, see Gershman \cite{gershman2018deconstructing}."
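To make the analysis above concrete, here is a small illustrative sketch in Python. This is not the authors' code: it assumes the standard Kalman-filter form of the normal-normal update implied by the text, uses only the standard library, and numerically checks the logit-probit approximation mentioned in point 1:

```python
import math

def normal_cdf(x):
    """Standard normal CDF, Phi(x)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def kalman_update(mu, var, reward, tau):
    """Normal-normal posterior update for the selected arm;
    tau is the standard deviation of the observation noise."""
    gain = var / (var + tau ** 2)
    return mu + gain * (reward - mu), var - gain * var

def exploration_features(mu, var):
    """V (value difference), RU (relative uncertainty), and V/TU
    for a two-armed bandit; var holds posterior variances."""
    v = mu[0] - mu[1]
    ru = math.sqrt(var[0]) - math.sqrt(var[1])
    tu = math.sqrt(var[0] + var[1])
    return v, ru, v / tu

# Toy trace: N(0, 1) priors on both arms, observation noise tau = 1.
mu, var, tau = [0.0, 0.0], [1.0, 1.0], 1.0
mu[0], var[0] = kalman_update(mu[0], var[0], reward=1.0, tau=tau)
v, ru, v_over_tu = exploration_features(mu, var)  # arm 0 now looks better

# Logit-probit approximation: sigma(a) ~ Phi(sqrt(pi/8) * a).
a = 1.0
logistic = 1.0 / (1.0 + math.exp(-a))
approx = normal_cdf(math.sqrt(math.pi / 8.0) * a)
```

With these toy priors the update yields a posterior mean of $0.5$ and a posterior variance of $0.5$ for the pulled arm, and the logit-probit approximation error at $a = 1$ is below $0.01$.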
> Is it possible to create a clear definition of what constitutes an in-context trial and what constitutes an across-context trial, i.e. where does in-context learning become meta in-context learning? Does an initial task trial always start with some degree of irreducible uncertainty?
Thank you for this comment. We view the relationship between in-context learning and meta-in-context learning as similar to the relationship between Bayesian inference and hierarchical Bayesian inference. They are both based on the same algorithmic principles applied at different conceptual levels. In-context learning and Bayesian inference are used for within-task learning, while meta-in-context learning and hierarchical Bayesian inference are used to pool information across tasks.
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal
Comment: Thank you to the authors for the response, updates and undertaking the new analyses, particularly the MMLU analysis, which make me lean more confidently towards acceptance. I will review and consider raising my score.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer for dedicating their time to our response and for their kind words. If there are any remaining inquiries, we are available for further discussion. | Summary: The paper explores a phenomenon referred to as “meta in-context learning” in large language models, where the in-context learning abilities of these language models can be recursively enhanced through in-context learning itself. To demonstrate this, the researchers examine two idealized domains: a one-dimensional regression task and a two-armed bandit task. They show that meta-in-context learning (1) improves in-context learning performance (2) dynamically shapes the large language model's priors over expected tasks and (3) also modifies its in-context learning strategies.
Strengths: * The high level question raised in this paper is interesting – can in-context learning be improved via the recursive operation of concatenating several different tasks in-context?
* The experimental setup is clean and the obtained experimental results convey a clear message. They clearly show that for the examined problems meta in-context learning helps, and moreover they reveal that the improvement (also) comes from previous tasks in context shaping the LLM prior for the examined task.
Weaknesses: * I found the setup and particularly the examined tasks to be too limited. Although the high level question is interesting I feel like much more could be done in order to shed light on it and connect it to actual scenarios in which in-context learning is used in practice.
The authors chose to focus on linear regression (synthetic and 60 real-world tasks filtered down to 42) / synthetic bandit problems. The abstract and intro describe that in-context learning is a very important feature of LLMs which facilitated their success; while true, that success relates not to these types of tasks but rather to natural language ones. I think that the interesting high-level question raised in this paper could easily have been studied for natural language tasks, and in particular related to papers showing multitask benefits which study which tasks are related and reinforce each other, such as Sanh et al., ICLR 2022, Aribandi et al., ICLR 2022, and others.
I acknowledge that there is a growing literature of works theoretically analyzing and experimenting with in-context learning on synthetic / mathematical problems; the authors mention some of it. However, in this case I think that the merits of the synthetic / simple / mathematical setup were not fully utilized: for example, the authors could have studied in this simple setting how the relatedness of different tasks in the same context affects their observed signal (eg by defining a formal metric of distance / similarity between linear regression tasks). A small subset of the results in section 3.3 is dedicated to a statement about task similarity and I think it should be expanded for deeper insight. In the absence of such deeper investigations in the synthetic realm, and on the other hand no experimentation on language tasks, this paper misses out on real insight relevant for language-related scenarios. For the scenarios studied in the paper, LLMs are not the go-to tool. I will note as an outlier to the above that the study of in-context learning strategies in the bandit case is deep, novel, and leverages the clean mathematical structure of the synthetic experiment. However, even in this case, I see no conclusions applicable to real world scenarios in which LLM in-context learning is the go-to tool.
* The paper is not segmented into “method”, “experimental setup”, “results” sections, but rather includes one section describing all of the above linearly according to the different tasks that were run. This is not a problem a priori, but I found the presentation hard to follow at times, and the authors may want to consider structuring the paper a bit more.
* Minor remarks:
** There are some typos (eg, end of parentheses in line 77, named citations without a year)
** The name “meta in-context learning” is used for a different method presented by Min et al. 2021 in this influential paper: “MetaICL: Learning to Learn In Context”. You may want to rename this paper.
** I think that the authors overstate when they claim meta in-context learning to be related to psychological experiments.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: What are the main practical consequences or implications of your findings for specific real world scenarios in which in-context learning in LLMs is currently used?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer xRJY,
First, we thank the reviewer for finding the paper interesting with a clear message conveyed from the experiments. We also thank the reviewer for their helpful comments and we have made a response for each of their comments along with suggested changes to the paper:
> I found the setup and particularly the examined tasks to be too limited. [...] I think that the interesting high level question raised in this paper could easily have been studied for natural language tasks.
We appreciate this suggestion which was echoed by all other reviewers as well. We have therefore conducted additional simulations of meta-in-context learning on the Massive Multitask Language Understanding (MMLU) benchmark. We will add the following section to our revised paper:
"**Meta-in-context learning on natural language processing benchmarks**:
Finally, we examined whether meta-in-context learning also improves upon in-context learning on standard natural language processing tasks. To test this, we conducted an experiment on the Massive Multitask Language Understanding (MMLU) benchmark \cite{hendrycks2020measuring}.
**Methods**:
We focus on the tasks from the STEM supercategory as other supercategories -- together with the addition of meta-in-context learning -- cause prompt lengths to exceed the limits of GPT-3. For the in-context learning simulations, we provided the model with $k \in \{0, 1, 2\}$ examples from the same category before prompting it on the test question. For the meta-in-context learning simulations, we additionally prepended three examples of two tasks from \emph{different} categories.
**Results**:
Figure 9 summarizes our results. We found that meta-in-context learning was in general beneficial in terms of performance. The biggest benefit was observed in the zero-shot case, in which meta-in-context learning reached an accuracy of $55.1$ percent, outperforming in-context learning by $22.4$ percent. This illustrates that LLMs do not necessarily have to be prompted by examples from the same category but that they can also transfer some knowledge from different categories."
The figure with the corresponding results can be found in the attached PDF under Figure 9.
> The authors could have studied in this simple setting how the relatedness of different tasks in the same context affects their observed signal (eg by defining a formal metric of distance /similarity between linear regression tasks).
We agree that beyond merely showcasing performance gains across tasks, our paper would benefit from delving into the underlying reasons for the improved performance of LLMs across tasks. In response, we have expanded our analysis by incorporating task similarity considerations, extending our approach initially adopted for Experiment 3 to both Experiment 1 and Experiment 2. We added task similarity as a regressor on Experiment 1’s and 2’s respective MSE/regret regression bar plots (Figure 2C and 3C).
For Experiment 1, we quantified task similarity using the average negative L2 norm of the underlying parameters (slope & intercept) with previous tasks. For Experiment 2, we quantified task similarity using the average difference of mean rewards with previous tasks. Our analysis shows a strong effect of Task similarity for each Experiment.
For a comprehensive visualization of these refined insights, we refer readers to the attached PDF which also includes the updated barplots in Figure 8.
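The two similarity measures described above can be sketched as follows. This is one plausible reading of the description; the function names and the exact averaging are illustrative, not taken from the paper:

```python
import math

def regression_task_similarity(current, previous_tasks):
    """Average negative L2 norm between (slope, intercept) pairs
    of the current task and previous tasks (Experiment 1 reading)."""
    dists = [math.dist(current, prev) for prev in previous_tasks]
    return -sum(dists) / len(dists)

def bandit_task_similarity(current_means, previous_means):
    """Average negative absolute difference of per-arm mean rewards
    between the current task and previous tasks (Experiment 2 reading)."""
    diffs = [abs(c - p)
             for prev in previous_means
             for c, p in zip(current_means, prev)]
    return -sum(diffs) / len(diffs)
```

Under either measure, identical tasks score 0 and increasingly dissimilar tasks score more negatively, so a positive regression coefficient on similarity corresponds to better performance when past tasks resemble the current one.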
> I see no conclusions applicable to real world scenarios in which LLM in-context learning is the go-to tool.
We thank the reviewer for raising the point regarding conclusions applicable to real-world scenarios. We agree that adding more real-world scenarios would further strengthen our paper. To address this point, we have added experiments on a natural language benchmark (MMLU) as described in more detail in our earlier response above.
> The paper is not segmented into “method”, “experimental setup”, “results” sections.
We thank the reviewer for this feedback, which we have used to restructure the sections of the paper. In particular, we have moved each experimental subsection one level higher (e.g., “3.1 Learning one-dimensional functions” is now “4 Learning one-dimensional functions”). Furthermore, each of these sections is now divided into a methods and results subsection. Finally, we have renamed “3 Experimental analyses” to “3 Experimental setup”. We hope that these changes make it easier for readers to follow our presentation.
> Minor remarks:
** There are some typos
** The name “meta in-context learning” is used for a different method presented by Min et al. 2021 in this influential paper: “MetaICL: Learning to Learn In Context”. You may want to rename this paper.
** I think that the authors overstate when they claim meta in-context learning to be related to psychological experiments.
We addressed the typos and agree that the link to psychological experiments is not clear so we removed the paragraph from our paper.
In addition, we want to thank the reviewer for mentioning the MetaICL paper. We have added it to the related work section, where we also describe how it differs from our approach:
"Meta-in-context learning versus classical meta-learning schemes:
[...] **Min et al. \cite{min2021metaicl} proposed a method called \emph{meta-training for in-context learning} following the same paradigm. Their work starts with a pretrained language model that is then trained (via an adjustment of its weights) on a large set of training tasks to do in-context learning, ultimately leading to a model with improved in-context learning capabilities.** In contrast to these approaches, our approach adapts a model to a series of learning problems entirely through the context itself as opposed to updating weights. "
Note that MetaICL stands for “meta-training for in-context learning” and not for “meta-in-context learning”, so we do not think that an entire renaming of our terminology is necessary.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their thorough rebuttal. I also looked at the graphs in the supplementary PDF. The zero-shot meta-ICL results on MMLU are encouraging. I am raising my score and I highly recommend expanding on these experiments if the paper is published at NeurIPS or when submitting to the next venue.
---
Reply to Comment 1.1.1:
Comment: We extend our gratitude to the reviewer for investing their time in reviewing our response and for their generous remarks. We acknowledge the feedback and, as a response, we are broadening our experimentation on the MMLU benchmark. This expansion involves incorporating various open-source models, and the outcomes of these experiments will be included in the Supplementary material. If there are any remaining inquiries, we are available for further discussion. | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their valuable and thoughtful feedback.
* Reviewer ShPz found “the paper an enjoyable read” and stated that “the [our] research is relatively well motivated, the analysis are carefully detailed, and the method is clearly explained.”
* Reviewer xRJY said that the “experimental setup is clean and the obtained experimental results convey a clear message.”
* Reviewer 3waD said that our paper “is well-written, and the results are thoroughly investigated and clearly conveyed.”
* Reviewer 6CRk called the paper “clearly written and easy to follow.”
Furthermore, the paper scored an average of 3.0 on both soundness and presentation.
However, all reviewers also raised important points and provided helpful suggestions. We were able to incorporate all of these suggestions and believe that doing so has improved our paper significantly. To summarize, we have made the following modifications:
* We tested meta-in-context learning on a natural language processing benchmark (MMLU), where we found good performance (requested by all reviewers).
* We have included experiments with **eight** additional models to investigate:
1. whether meta-in-context learning is an emergent phenomenon that arises at scale (reviewer ShPz).
2. whether meta-in-context learning can also be found in open-source models (reviewer ShPz).
* We ran additional experiments with non-linear (quadratic) functions, where we also found meta-in-context learning to be beneficial (reviewer 6CRk).
* We performed additional analyses to investigate how task similarity affects meta-in-context learning performance (reviewer xRJY).
* We incorporated references suggested by the reviewers (reviewers NP3F, xRJY, ShPz).
* We added more details on the probit regression analysis from the two-armed bandit task (reviewer 3waD).
* We fixed reference issues, typos and improved the structure slightly.
We describe these changes in detail in our responses to the individual reviews below. We again want to thank the reviewers for their time and for actively taking part in the review process.
Pdf: /pdf/f7a79e55f8deea09e78d43ea9ce08de1bc7708c9.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper demonstrates that large language models (LLMs) are capable of meta-in-context learning: updating their in-context-learning abilities when prompted with examples of several tasks
The authors empirically show this capability on several learning paradigms, including supervised learning (1D linear regression), reinforcement learning (2-armed bandits), and several real-world linear regression problems.
These experiments show that LLMs, and GPT-3/GPT-4 more particularly, can effectively adapt their in-context learning algorithm while also matching simple baselines like Bayesian linear regression (BLR), upper-confidence bound (UCB), and random forests.
They also show that meta-in-context can outperform standard in-context learning, which is especially relevant given the popularity of this technique with LLMs.
Strengths: 1. I found the paper an enjoyable read. The research is relatively well motivated, the analysis are carefully detailed, and the method is clearly explained. In fact, I believe I could easily replicate the results from the paper because all experimental testbeds are precisely described and the exact prompts are provided (which also help illustrate the method; see blue panels). While more baselines could be included for completeness, the experiments clearly show the meta-in-context learning effect so I don't think they are necessary.
2. Despite its simplicity, the presented idea is elegant and I could see it impacting research beyond large language models. For example, it is not difficult to imagine extensions beyond text inputs (say, vision, audio, or robotics), or theory extensions (in the flavor of "Transformers learn in-context by gradient descent"), or even applying a similar strategy beyond in-context learning (eg, for prompt or prefix tuning). As such, I think it'll garner interest from the NeurIPS community.
Weaknesses: 1. Experimental design: given that there is no theory, I find the experimental section a bit light. First, the authors only include results with OpenAI models (GPT-3 and GPT-4) which are not open-source. So it's not clear if meta-in-context learning works as well (or at all?) with publicly available LLMs. Second, can *any* LLM meta-learn in-context? How does the meta-learning ability improve with model/data size? Third, meta-in-context learning is only tested on small-scale and toy datasets -- even the "real-world" ones only require simple linear regression models. It'd be much more compelling if the authors could show that meta-in-context learning also improves upon in-context learning on standard NLP tasks. NLI-type tasks would be a promising start but I'd find question answering (eg, SQuAD) or even GSM8K much more interesting.
2. Novelty: The proposed method is similar to "MetaICL: Learning to Learn In Context" by Min et al. and published at NAACL in 2022 -- yet, the authors don't even mention this work. As far as I understand, MetaICL pretrains the model such that the LLM weights are trained to adapt quickly to in-context prompts (hence they have learned to learn), while this work demonstrates that meta-learning emerges even with the standard LLM training objective. It'd be insightful to compare the two approaches to quantify how much meta-training helps to adapt quickly. Min et al.'s work is especially relevant because it demonstrates benefits on real NLP tasks.
3. Scholarship: Many references in the bibliography are broken. This might seem a detail but not when a third of the references don't even state a venue.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Do open-source models also exhibit meta-in-context learning? Why or why not?
- How does meta-in-context learning depend on the size of the model / data it is trained on?
- Can meta-in-context learning outperform in-context learning on standard NLP tasks like NLI, SQuAD, or GSM8K?
- How is this work different from the Min et al.'s MetaICL paper?
- Please fix your bibliography.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations of the work are appropriately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer ShPz,
We appreciate that the reviewer found our paper well-motivated, clear enough to be replicated easily and the idea elegant and possibly impactful. We also thank the reviewer for their helpful comments and we have made a response to each of their comments along with suggested changes to the paper.
> First, the authors only include results with OpenAI models (GPT-3 and GPT-4) which are not open-source. So it's not clear if meta-in-context learning works as well (or at all?) with publicly available LLMs
We appreciate the feedback and agree that some analysis on other LLMs (especially open-source models which are transparent on training and architectures) does improve the empirical analysis. Therefore, we conducted additional simulations of meta-in-context learning on the latest open-source models (MPT-30b, Falcon-40b, Llama-2-7b/13b/70b). We will add the following paragraph to our revised paper:
"**Meta-in-context learning in open-source models**: To comprehensively explore meta-in-context learning, we extended our evaluation to encompass five distinct open-source models. This expansion allowed us to examine whether this phenomenon is also applicable to less opaque model architectures. Notably, while the majority of models (Falcon-40B \cite{falcon40b} and Llama-2 \cite{touvron2023llama2} models) exhibited no indications of meta-in-context learning, intriguingly, MosaicML's MPT-30B \cite{MosaicML2023Introducing} demonstrated this ability. The results are included in the Supplementary material."
The figure with the corresponding results can be found in the attached PDF under Figure 6B.
> How does the meta-learning ability improve with model/data size?
We thank the reviewer for the feedback as we concur with the significance of investigating whether meta-in-context learning emerges as a function of model or data size. In response, we will incorporate the subsequent section into our revised paper:
"**Meta-in-context learning is an emergent phenomenon**:
In order to gain a deeper understanding of the phenomenon's characteristics, we pursued an examination into the progression of meta-in-context learning proficiency in relation to both model complexity and dataset size. To this end, we undertook an analysis encompassing smaller GPT-3 models, specifically text-ada-001, text-babbage-001, and text-curie-001 \cite{openaiAPI}. This analysis (which can be found in the Supplementary material) revealed a noteworthy trend, wherein solely text-davinci-002 seems to exhibit meta-in-context learning capabilities, thereby making it an emergent phenomenon."
The figure with the corresponding results can be found in the attached PDF under Figure 6A.
> It'd be much more compelling if the authors could show that meta-in-context learning also improves upon in-context-learning on standard NLP tasks.
We appreciate this suggestion which was echoed by all other reviewers as well. We have therefore conducted additional simulations of meta-in-context learning on the Massive Multitask Language Understanding (MMLU) benchmark. We will add the following section to our revised paper:
"**Meta-in-context learning on natural language processing benchmarks**:
Finally, we examined whether meta-in-context learning also improves upon in-context learning on standard natural language processing tasks. To test this, we conducted an experiment on the Massive Multitask Language Understanding (MMLU) benchmark \cite{hendrycks2020measuring}.
**Methods**:
We focus on the tasks from the STEM supercategory as other supercategories -- together with the addition of meta-in-context learning -- cause prompt lengths to exceed the limits of GPT-3. For the in-context learning simulations, we provided the model with $k \in \{0, 1, 2\}$ examples from the same category before prompting it on the test question. For the meta-in-context learning simulations, we additionally prepended three examples of two tasks from different categories.
**Results**:
Figure 9 summarizes our results. We found that meta-in-context learning was in general beneficial in terms of performance. The biggest benefit was observed in the zero-shot case, in which meta-in-context learning reached an accuracy of $55.1$ percent, outperforming in-context learning by $22.4$ percent. This illustrates that LLMs do not necessarily have to be prompted by examples from the same category but that they can also transfer some knowledge from different categories."
The figure with the corresponding results can be found in the attached PDF under Figure 9.
> 2. Novelty: The proposed method is similar to "MetaICL: Learning to Learn In Context" by Min et al. and published at NAACL in 2022 -- yet, the authors don't even mention this work.
Thanks a lot for mentioning this paper – we had somehow missed it. We have added it to the related work section, where we also describe how it differs from our approach:
"Meta-in-context versus classical meta-learning schemes:
[...] **Min et al. \cite{min2021metaicl} proposed a method called \emph{meta-training for in-context learning} following the same paradigm. Their work starts with a pretrained language model that is then trained (via an adjustment of its weights) on a large set of training tasks to do in-context learning, ultimately leading to a model with improved in-context learning capabilities.** In contrast to these approaches, our approach adapts a model to a series of learning problems entirely through the context itself as opposed to updating weights."
> 3. Scholarship: Many references in the bibliography are broken. This might seem a detail but not when a third of the references don't even state a venue.
Thanks a lot for pointing out this mistake. We strive to take good scholarship seriously and therefore went through all references to fix cases that had missing venues or other incomplete information.
---
Rebuttal Comment 1.1:
Title: Thanks for the extensive updates
Comment: Thank you for the many updates. Here are a few more thoughts and some smaller concerns:
* The open-source and smaller-scale experiments look surprising — **thanks!** It's especially interesting to me that Meta-ICL works for MPT-30B but doesn't for the other open-source models, even when they are stronger (eg, LLaMa 2). Interestingly, it's also the model that gets the lowest MSE. Do you have a hypothesis for why that's the case? Could it be that the linear function learning benchmark isn't well suited for these models that are heavily text-optimized?
* On the MMLU tasks: it looks like ICL is overtaking MetaICL already at 2 few-shot exemplars. This looks like a negative result for MetaICL on text tasks, which is still interesting. Were you able to tune the number of in-context tasks for MetaICL or was the context length a limiting factor? What accuracy does ICL reach if we allow it to use a similar context length as MetaICL? In other words, can we use MetaICL to make up for a lack of labelled exemplars in one task by leveraging other related tasks?
* Regarding Min et al., 2022: thanks for mentioning it. Given that it's an older work, I think this submission could be further strengthened by showing if Min et al.'s pretraining objective is necessary to get the best meta-icl performance. For example, can the open-source models learn to meta-icl if pretrained / finetuned with this objective?
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the feedback and have tried to answer their questions as best we can given the limited time:
- We believe that every LLM behaves differently in different benchmarks and so it is difficult to speculate about one being better in performance for a given task. Nonetheless, our meta-in-context learning experiments require very long context sizes. Therefore, we hypothesize the following which we will add to the **Meta-in-context learning in open-source models** section of our paper:
*"We believe that models trained in a regime with a longer context window are more suitable to show the emergence of this phenomenon. Indeed, MPT-30B has a context size of 8,000 tokens, as opposed to the LLaMa-2 models, which were trained with a context window of 4,096 tokens. This is speculative, and we leave the analysis of which factors influence the emergence of meta-in-context learning as a future research question."*
- The cost induced by the context length was the main constraint, and therefore we only ran the benchmark using 3 examples per task. The similar-context-length condition is an interesting suggestion, and we will run the analysis for the camera-ready paper and include it in the Supplementary Material.
- Finally, we agree that this line of work would be of great interest for future work but we believe it is out of scope for this project. Therefore we added the following to our **Discussion**:
*"A promising direction for further research involves delving into the factors that contribute to the emergence of the meta-in-context learning phenomenon. One potential factor, as mentioned earlier, is the length of the context window. Another avenue to explore is to assess whether specialized models fine-tuned for in-context learning, like the framework proposed by Min et al. \cite{min2021metaicl} (Meta-training for In-Context Learning), yield optimal performance in the context of meta-in-context learning."* | null | null | null | null | null | null |
The noise level in linear regression with dependent data | Accept (poster) | Summary: The focal point of this paper is very precise, namely least-squares linear regression under data which need not be independent. Regardless of linearity, when the model is properly specified (realizability, i.e., the expected squared error minimizer is included in the model), martingale-based arguments are well-established in the literature. It is this realizability assumption that the authors remove. They conduct a blocking-based argument; this breaks up the data into blocks which are essentially independent, but takes a hit because the effective sample size is reduced. Their blocking-based argument is done in a careful way, such that the resulting "noise level" factor (a variance-like quantity) that appears in excess risk bounds is not excessively inflated by this reduction.
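The blocking idea described here can be illustrated with a self-contained toy simulation (ours, not the paper's analysis): an AR(1) covariate process is geometrically $\beta$-mixing, a nonlinear target makes the linear model misspecified, and blocking trades the $n$ dependent observations for $n // k$ nearly independent block sums:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch (assumed setup, not the paper's): a beta-mixing AR(1)
# covariate process, a misspecified (nonlinear) target, and the OLS fit.
n, k = 2000, 20                     # sample size and block length
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):               # AR(1): geometrically mixing
    x[t] = 0.8 * x[t - 1] + rng.normal()
y = np.tanh(x) + 0.1 * rng.normal(size=n)   # E[y|x] is nonlinear: model misspecified

X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS solution

# Blocking: group consecutive per-sample score terms into n // k blocks;
# distinct blocks are nearly independent, at the cost of a reduced
# effective sample size.
scores = X * (y - X @ beta_hat)[:, None]          # per-sample gradient terms
blocks = scores[: (n // k) * k].reshape(n // k, k, 2).sum(axis=1)
print(blocks.shape)   # one summed score vector per block: (100, 2)
```

Concentration arguments are then applied to the block sums rather than to the raw, dependent samples.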
Strengths: The paper is extremely well-written, with notation and exposition all crystal-clear. The main content is all quite technical in nature, but the key ideas are explained in an intuitive fashion, with appropriate references to the relevant literature upon which this work stands. The paper is centered around the main technical result (Thm 3.1), and is organized to give relevant background to understand the technical context in which this result stands, and to describe the essential points of their proof. In my opinion, the balance between informal and formal content is excellent.
The main result of this work is Theorem 3.1. While there are several technical "burn-in" conditions (the sample cannot be too small, relative to the degree of dependence and noise level, etc.), the core result (3.2) is clear, appealing, and to the best of my knowledge fills a valid gap in the theoretical literature for linear least-squares with dependent data.
Weaknesses: Obviously this is a technical paper with a very particular problem of interest, and thus the overall "impact" on the field of machine learning is limited, but the solution provided by the authors to this problem is presented clearly, and the main claims are to the best of my understanding solid.
The only point I had trouble with in terms of clarity was the notion of "instance-specific" and "instance-optimal" performance guarantees. I know the authors try to spell this out in the first paragraphs of 3.1, but if space allows, I think a more explicit explanation of the "global" complexity in previous works would make the "local" complexity here a lot more clear.
A couple other small points:
- Since there is some space left, I felt that it would be nice to give $\\widehat{M}$ a definition analogous to $M\_{\\star}$ in (2.1), instead of just giving the form in (2.4) and saying it is the OLS solution. I know this is simple to verify, but if space allows it, such an explicit expression makes it more friendly to a crowd familiar with notions of ERM.
- The last sentence of paragraph 2 in section 1 is repeated.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: I do not have any points to confirm that would change my opinion of the paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The technical assumptions are all given explicitly, with plentiful references to the existing literature, so I feel the limitations of this work are quite clear.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and effort spent on evaluating our manuscript.
* We have clarified the distinction between local and global complexities by adding the following sentence to Section 3.1: "In other words, they compete against the worst distribution at a given level of mixing, whereas we compete against a fixed distribution."
* We have also added a short remark that $\widehat M$ is an empirical risk minimizer at the end of the problem formulation as per your request.
---
Rebuttal 2:
Title: Re: Rebuttal by Authors
Comment: I thank the authors for their response. Having read the other reviews, my overall opinion remains the same. | Summary: This paper deals with linear regression with dependent ($\beta$-mixing) data. It provides an upper bound of the OLS error in terms of the sample size and the effective dimension of the covariate matrix.
Strengths: This paper studies linear regression with dependent data. The main idea is to decompose the data into blocks and apply concentration inequalities. The authors also provide bounds on the burn-in period. The overall results are new, though the idea does not seem to be novel. The presentation of the paper is clear, and I mostly enjoyed reading the paper.
Weaknesses: The main weakness of this paper is that the approach of combining blocking with concentration is quite traditional. As pointed out by the authors themselves, this idea was carried out in [10] (almost 30 years ago), with subsequent development by Massart, Wu...etc. The main result (Theorem 3.1) is more or less expected in the high dimensional statistics. Moreover, there is no (empirical) experiment for illustration.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I have no specific question, but one general comment. The result established in this paper has close affinity to nonparametric estimation (with possibly non i.i.d. noise, e.g. time series). This is a vast field, and the authors may want to add more discussions on this connection, see e.g. Wu's paper Nonlinear system theory: another look at dependence. PNAS, 2005.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and effort reviewing our submission.
By our estimation, the reviewer's main concern is a lack of novelty, in the sense that similar estimates on the random walk component of our analysis (i.e. the "numerator" in the estimation error) have appeared previously in the large/moderate deviations literature. We do not dispute this fact but would like to add some nuance to the claim that the result is "expected" or lacks novelty, and we kindly ask the reviewer to reconsider their score in light of this if possible.
First, one of the main points of our manuscript is precisely that this is expected (from the central limit theorem) but that, nevertheless, sharp estimates for learning with dependent data are conspicuously absent from the literature (beyond certain special cases, as we note). Indeed, a quick glance at our references (and more can probably be found) reveals that several top-tier venues including NeurIPS, the Annals of Statistics, and COLT have published results on square loss quite recently in which the noise term is inflated by such mixing-time dependencies. Moreover, just as we do here, some of these papers focus entirely on linear classes (e.g. Wong et al. (Ann. Stat. 2020) and Nagaraj et al. (NeurIPS 2020)). Hence, even if these papers combine blocking with concentration, they do not arrive at rates that match known asymptotics---again: remedying this is precisely the point of the present manuscript.
Arguably, we then have that 1) the problem is of interest to the intended audience; 2) to the best of our knowledge, no previous works manage to obtain sharp non-asymptotics for any convex class with general dependent data---even though they, like us, combine blocking and concentration, they only obtain a multiplicative rather than additive dependency on mixing; and 3) misspecified linear regression is a natural starting point for sharpening this dependency.
With this in mind then, it appears a little unfair to us to reject our solution based on the fact that it is "simple" and draws on existing ideas from probability theory.
---
Rebuttal Comment 1.1:
Comment: Many thanks for the detailed explanations. The score remains unchanged. | Summary: The paper explores the impact of noise level in linear regression for dependent data by blocking technique, which can accommodate a broad type of dependent structures. Theoretical justification of the non-asymptotic guarantee and excess risk bound are provided, imposing any realizable assumptions on the noise. The paper's insights and conclusions are likely to influence and inspire further research in the field.
Strengths: The authors propose a novel perspective by combining $\beta$-mixing assumption and blocking technique, leading to a new approach for addressing the dependent data. This innovative combination offers unique insights and contributes to advancing the field in handling such data structures. This paper demonstrates a solid theoretical foundation, characterized by sound reasoning and logical coherence. The non-asymptotic results are precisely analyzed, and potential limitations are appropriately addressed.
Weaknesses: 1. Inadequate comparison with prior work: The paper could benefit from a more thorough comparison with existing literature. Providing a detailed analysis and critique of related works would help situate the proposed approach in the context of prior research. Identifying the limitations of previous methods and explicitly explaining how the proposed approach addresses those limitations would strengthen the argument for the paper's contribution.
2. Lack of clarity in methodology: The paper could benefit from providing more detailed and explicit explanations of the research introduction and methodology. Some sections may be unclear or lack sufficient technical details, making it difficult for readers to understand the implementation nuances. Adding supplementary information, such as mathematical formulations, pseudo-code, or algorithmic details, would greatly improve the clarity of the work.
3. Lack of experimental validation: The manuscript contains no numerical results, which leaves the evaluation of the proposed approach limited in scope. To strengthen the research, the authors should consider adding numerical experiments to demonstrate the effectiveness and generalization of their method compared to existing methods. Additionally, a comparative analysis with existing methods would further highlight the strengths and weaknesses of the proposed method.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: There are some issues need to be addressed.
1. The martingale technique is commonly used to address dependence in stochastic processes. Could the authors highlight the differences between the proposed methods and martingale-based approaches? Further, it would be valuable for the authors to discuss realizable versus non-realizable learning in the dependent setting.
2. In the problem formulation, does the last equality hold in the definition of excess risk in Equation (2.3)?
3. The authors claimed that the analysis can be decomposed into controlling the lower tail of the empirical covariance matrix and the interaction between the noise and covariates, and thus adopt the $\beta$-mixing assumption to address the interaction term for the non-asymptotic bound. However, I wonder what the main contribution of the proposed method is beyond the use of $\beta$-mixing and blocking, since these two instruments are very common in tackling dependent data or time series. Maybe the Summary section needs to be rewritten.
4. In Theorem 3.1, it would be better if the detail of correcting the blocking inflation factor is exposed.
5. The authors mentioned that the non-asymptotic result allows for arbitrary dimensions and unbounded process, but there is no further discussion or elaboration on this aspect.
6. The authors have mentioned that the proposed method outperforms some existing work. It would be beneficial if the authors could conduct numerical experiments to support the proved theoretical results in comparison with existing works, for example, Nagaraj et al. (2020).
7. The manuscript is not well-organized, there are many typos, and the writing needs more polishing. For example, in Section 1, Page 1, Paragraph 2, the last sentence appears twice.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: No other major concerns than the ones listed in the weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We do not agree with the assessment provided by reviewer 8guK and believe it should be disregarded. There is no real criticism of our work in this review other than sweeping and unsubstantiated (and sometimes contradictory) claims. The main points of contention appear to be 1) related work, 2) clarity, and 3) experiments. We address these below.
1. While we do not claim to be exhaustive, we believe we have provided a fair overview of closely related contributions. In particular, we have provided a detailed comparison to the work of Nagaraj et al. in Section 3.1, which to the best of our knowledge is most closely related.
We also note that the other reviewers appear to be in agreement with us on this point.
2. The reviewer believes that our work "lacks clarity in methodology". We find this criticism to be rather sweeping and without proper justification. We also note that the other reviewers appear to find the paper rather clear.
3. While experiments generally do not hurt, we do not think they would do much to improve this particular paper, and in particular we do not agree with the way this criticism is phrased by the reviewer. Our result is a refined analysis of a very standard algorithm; in particular, we do not provide any new algorithm, so a "comparative analysis with existing methods" does not make much sense.
We also note that the review is somewhat incoherent, examples including for instance that the paper is likely to "inspire further research in the field" while being awarded a contribution score of "1".
We respond to the reviewer's questions below:
1. As is stated in the introduction, the differences lie in the set of assumptions. This in turn necessitates a new proof approach as is provided here.
2. Yes, this is a well-known identity for the population risk in the regression literature and is easily obtained invoking optimality of $M_\star$ to the population risk criterion.
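For completeness, the identity can be verified in two lines (using what we take to be the paper's notation, with $M_\star$ the population risk minimizer): for any $M$,

$$ \mathbb{E}\,\|Y - MX\|^2 = \mathbb{E}\,\|Y - M_\star X\|^2 + 2\,\mathbb{E}\,\langle Y - M_\star X,\ (M_\star - M)X\rangle + \mathbb{E}\,\|(M_\star - M)X\|^2, $$

and the cross term vanishes by first-order optimality of $M_\star$ (the normal equations $\mathbb{E}[(Y - M_\star X)X^\top] = 0$), so the excess risk equals $\mathbb{E}\,\|(M - M_\star)X\|^2 = \|M - M_\star\|_{\Sigma_X}^2$ with $\Sigma_X = \mathbb{E}[XX^\top]$.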
3. As is clearly stated at several places throughout the paper, the main contribution is that we are the first to provide a sharp non-asymptotic analysis of least squares for a wide range of processes ($\beta$-mixing). Previous results operate either under strict realizability assumptions or deflate the rate by a "mixing time factor". With regard to the use of blocking and mixing, the novelty lies in the way these are combined.
4. We could not follow the meaning of this question. Could the reviewer clarify the meaning of exposing the blocking inflation factor? We have clearly stated our burn-in conditions.
5. As the reviewer notes, we state in the introduction that the one-dimensional "Bernstein sketch" can be extended to arbitrary dimensions and unbounded processes. This is precisely the statement of Theorem 3.1, which holds for covariates taking values in any finite-dimensional space and does not impose boundedness assumptions on these covariates other than the existence of 4 moments. We are left wondering what the reviewer means.
6. We believe the rather detailed comparison in Section 3.1 makes this point sufficiently clear.
7. We thank the reviewer for pointing out the accidental doubling of this sentence. We do, however, strongly disagree with the suggestion that the manuscript is not well-organized, and this suggestion appears strange to us in light of how highly the other reviewers rate our presentation.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply. I've gone through the authors' rebuttal, but my score will remain the same. | Summary: This paper studies the risk bounds of OLS for linear regression with dependent data. In particular, the label noise is allowed to be non-martingale. It shows that, after a burn-in phase, OLS with dependent data archives a bound of the same order as if the data is iid, provided that the failure probability is moderately small. The proved bound is particularly interesting because it suggests that the leading error does not explicitly depend on the mixing time.
Strengths: + Excellent presentation.
+ Sharp risk bounds are well-developed for OLS for linear regression with iid data. However, when the data is dependent, the risk achieved by OLS is less clear. Surprisingly, this work proves that, even when the data is dependent, the risk of OLS still recovers that predicted by the central limit theorem and does not rely on the mixing time, given that the sample size exceeds a burn-in requirement and that the failure probability is moderately small.
+ The proof is decomposed into two neat parts, the first part controls a dependent random walk with the blocking technique and the other part controls the lower tail of the empirical covariance. The proof demonstrates the new ingredients of this work that allows obtaining the improved bound.
+ Prior works are well discussed. I especially appreciate the comparison with [23], which clarifies that, in the worst case, the bound will depend on the mixing time, but in average cases, the bound does not explicitly depend on the mixing time.
Weaknesses: Please see the questions below.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. My main question is regarding line 158, the "hypercontractivity condition". To my knowledge, in the iid case, the hypercontractivity condition states that
$$ E \langle v, x \rangle^4 \le h^2 \cdot \langle v, E [xx^\top] v \rangle^2. $$
In comparison, the "hypercontractivity condition" introduced in line 158 misses a square on the right-hand side. Could you please comment on this difference? I understand that the "hypercontractivity condition" introduced in line 158 only requires $v$ such that
$$ \langle v, \Sigma v\rangle = 1.$$
Will the "hypercontractivity condition" in line 158 imply an additional condition on the data dependence?
2. Line 182. ```The first burn-in condition of (3.3) is standard for control of the lower tail... it is optimal even in the iid regime.```
I am not sure if this statement is accurate. For example, the work by [BLLT] (and its many follow-ups) allows $n$ to be smaller than $d_{X}$. Granted, they do need stronger assumptions such as data being iid and sub-Gaussian.
3. One drawback of the bound is that the fail probability $\delta$ cannot be made arbitrarily small. The condition (3.5) implies the mixing time has to be sufficiently fast. This is also acknowledged in lines 192-197.
4. Line 169. ```The scaling with ... thus scales as expected in the iid regime, but also degrades gracefully with dependence. We reiterate that (3.2) does not depend directly on mixing in the leading order term....```.
I feel the discussion is a little confusing. Note that even if the bound has a multiplicative dependence on the mixing time, it still degrades "gracefully" with dependence. Because in the iid case, the mixing time is just $1$ so a multiplicative dependence does not harm either.
## Typos
1. Line 52. Are there some typos in the definitions of $\bar V_i$? I guess the right definition should be
$$ \bar V_i = \sum_{j = (i-1)k + 1}^{ik} V_j$$
2. Line 128. "empirical excess risk" -> "excess risk"
3. Should mention that eq (3.2) holds with probability $> 1-\delta$.
[BLLT] Bartlett, Peter L., Philip M. Long, Gábor Lugosi, and Alexander Tsigler. "Benign overfitting in linear regression." Proceedings of the National Academy of Sciences 117, no. 48 (2020): 30063-30070.
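A quick numerical sanity check of that block-sum definition (a sketch; the helper name `block_sums` is ours):

```python
import numpy as np

def block_sums(v, k):
    """Block i of length k collects v_{(i-1)k+1}, ..., v_{ik}; return the
    m = len(v) // k block sums, discarding any incomplete trailing block."""
    v = np.asarray(v)
    m = len(v) // k
    return v[: m * k].reshape(m, k).sum(axis=1)

print(block_sums([1, 2, 3, 4, 5, 6], k=2))   # -> [ 3  7 11]
```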
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and effort spent on reviewing our manuscript. We provide brief answers to their questions 1,2 and 4 below (with 3 being more of a statement).
* Answers to Questions.
1. (hypercontractivity/moment equivalence). For the sake of argument, let us assume that the process in question is stationary. The condition we use is then equivalent to $L^4/L^2$ equivalence on the $\Sigma_X$-sphere. Namely, for fixed $v\in \partial\Sigma_X$, the requirement that $\mathbb{E} \langle v,X\rangle^4 \leq \langle v, \mathbb{E} XX^\top v\rangle^2$ is exactly the same as the requirement that $\mathbb{E} \langle v,X\rangle^4 \leq \langle v, \mathbb{E} XX^\top v\rangle$, precisely because $\langle v, \mathbb{E} XX^\top v\rangle=1$, since $\partial\Sigma_X = \{ v : \langle v, \mathbb{E} XX^\top v\rangle=1 \}$ by definition. Put differently, these conditions only differ significantly if one requires them to hold outside the $L^2_{P_X}$-unit sphere, but since we are working with a linear class we only need to consider the unit ball (by a rescaling argument). In general (without stationarity) these conditions will not differ by more than a factor of a condition number between $\Sigma_X$ and $\Sigma_{X_i}$.
2. This is a good point, thank you. We should have phrased this more carefully so that it refers to invertibility of the empirical covariance matrix---changes to reflect this have been made.
3. This is correct---these burn-ins are obtained by requiring that various additive terms are "sufficiently small". We could in principle present a bound valid without these mixing-related burn-ins but there is no guarantee that our results are sharper than existing bounds for $\delta$ in this regime.
4. We will clarify this point, thank you.
Typos: We thank the reviewer for pointing these out to us.
---
Rebuttal Comment 1.1:
Title: Regarding hypercontractivity
Comment: I understand that your condition is only stated for vectors on a unit ellipsoid, so it is equivalent to the conventional version in the stationary case. My concern was that
```
Will the "hypercontractivity condition" in line 158 implies an additional condition on the data dependence?
```
In the response, you have suggested that
"In general (without stationarity) these conditions will not differ by more than a factor of a condition number between $\Sigma_X$ and $\Sigma_{X_i}$"
I believe that missing a condition number between $\Sigma_X$ and $\Sigma_{X_i}$ is a significant caveat and should have been clarified in the paper. Also, it would be helpful to give examples to demonstrate when this condition number is small and how this requirement affects the data dependence.
---
Reply to Comment 1.1.1:
Comment: Many thanks for taking the time to clarify your question.
First, just to be clear, let us state that any condition number that does appear will be relegated to the factor $\mathsf{h}$, which only appears in the burn-in. It does not affect the final rate in the leading-order term.
Now, you are correct in stating that some degree of well-conditioning is necessary. Although we believe this has more to do with us requiring recovery of $M_\star$ in $\Sigma_X$-norm (as opposed to some weaker 2-norm given by $\Sigma' \prec \Sigma_X$) than with dependence directly.
With regards to the effect on imposing dependence structure, it is instructive to first consider what happens for $\mathsf{X}$ a compact state space. In this case, $X_i X_i^\top$ is uniformly bounded and the condition we impose holds trivially with $\mathsf{h}$ exhibiting dependence on the diameter of $\mathsf{X}$ and the smallest eigenvalue of $\Sigma_X$. Hence, at least for bounded covariates no further assumption on dependence is necessary to verify our condition.
Now, beyond boundedness the question is actually rather subtle even if we do not impose any dependence structure. Namely, one can construct regression tasks with independent but _not_ identically distributed covariates in which an exponential (in dimension) blow-up in the risk is unavoidable unless the covariances are well-conditioned in some sense. A detailed statement can be found as Theorem 6.2 in [1].
We will clarify this following the main Theorem statement and are of course also happy to include further discussion to flesh out these subtleties in an updated version of the manuscript.
[1] Tu, Stephen, Roy Frostig, and Mahdi Soltanolkotabi. "Learning from many trajectories." arXiv preprint arXiv:2203.17193 (2022). | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper gives finite-sample bounds on the excess risk of ordinary least squares regression in the non-realizable case with dependent ($\beta$-mixing) data. The result asymptotically matches the predictions of the central limit theorem. The dependence on the mixing behaviour of the process is relegated to terms of smaller order in $1/{n}$, where $n$ is the length of the observed sample path, so that, apart from absolute constants, the bound asymptotically coincides with those for independent data.
The technical proof decomposes the excess risk as the norm of the product of a weighted random walk and the inverse covariance matrix (prefiltered with the true covariance). These two factors are bounded separately, in both cases using the blocking technique, which has become a standard method for dealing with $\beta$-mixing processes. How the mixing times are prevented from entering the asymptotically dominant term of the bound is already explained in the introduction via the example of Bernstein's inequality. The technical details are a major achievement (I did not have time to verify all of it) and are relegated to the appendix.
Strengths: The paper addresses an important and obvious problem and offers a largely satisfactory solution.
The multiplicative dependence on the mixing times is a major problem which besets many bounds for dependent data. It is a major accomplishment to free the dominant term of the bound from this dependence. The illustrative explanation in section 1.1 (lines 44-57) is very nice.
I did not find any faults, but because of time constraints I could not verify all the material in the appendix. Otherwise I would have rated the soundness with 4 rather than 3.
Weaknesses: The statement of Theorem 3.1 is somewhat opaque because of the choice of the blocking partition. Presumably the cardinalities of the partition members have to be different to accommodate the non-stationarity of the process, as specified in eq (3.4).
The theorem would be more transparent if first stated for stationary processes using a homogeneous partition. The more general version could be stated in the supplement.
The most critical "burn-in" condition seems to be the second condition in 3.3, because of its dependence on $\sigma^2$ and $\delta$ and in particular if $s=4$. How shall we interpret the slow-down of the "burn-in" as $\sigma^2$ becomes small? It seems to me that this condition would merit a more detailed discussion.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Suggestions:
Thm 3.1 first stated for stationary processes.
Detailed discussion of the second condition in (3.3)
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations seem to be adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and effort spent on evaluating our manuscript.
* While we agree that there is a trade-off between generality, opacity, and concision, we prefer to leave the statement of Theorem 3.1 as is. Our motivation is that we already provide an informal theorem statement which is relatively easy to parse.
* (Relating to the appearance of $\sigma^2$ in the denominator in the second burn-in condition). Note that it is in fact the squared ratio of the $s$-th moment to the 2nd moment that appears in this condition. This ratio appears because we use moments higher than the 2nd to control the deviation of a normalized random walk from its 2nd moment (in principle by Markov's inequality). We will clarify this point in the manuscript.
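For intuition, the moment argument alluded to here can be sketched generically (this is a hedged illustration, not the paper's exact statement): for a random variable $Z$ with finite $s$-th moment, Markov's inequality applied to $|Z|^s$ gives

$$\Pr\left(|Z| \ge t\right) = \Pr\left(|Z|^s \ge t^s\right) \le \frac{\mathbb{E}|Z|^s}{t^s},$$

so controlling a deviation at a scale $t$ proportional to the 2nd moment makes a ratio of the $s$-th moment to a power of the 2nd moment appear, which is how a small $\sigma^2$ can end up slowing a burn-in condition of this kind.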
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification of the second point.
I find the "informal theorem" a bit too informal. I had hoped for a precise version for the stationary case. | null | null | null | null | null | null |
A Unified, Scalable Framework for Neural Population Decoding | Accept (poster) | Summary: Neural population decoding refers to inferring the behavioral output of organisms from recordings of the activity of a population of neurons. This paper introduces a transformer architecture to improve decoding accuracy. The efficiency of the system is increased by representing input spike trains with event-based tokens and by switching to a latent representation (to decrease the size of the attention matrices).
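The event-based tokenization mentioned here can be illustrated with a small sketch (a hypothetical simplification, not the authors' actual code; the real scheme also attaches learned unit embeddings):

```python
# Hypothetical sketch of event-based spike tokenization (not the authors' code).
# Each spike is kept as a discrete (unit_id, timestamp) token, so no temporal
# resolution is lost to binning.

def tokenize_spikes(spike_times_per_unit):
    """spike_times_per_unit: dict mapping unit id -> list of spike times (s)."""
    tokens = [(unit_id, t)
              for unit_id, times in spike_times_per_unit.items()
              for t in times]
    tokens.sort(key=lambda tok: tok[1])  # order tokens by spike time
    return tokens

recording = {"unit_a": [0.012, 0.53], "unit_b": [0.25]}
print(tokenize_spikes(recording))
# -> [('unit_a', 0.012), ('unit_b', 0.25), ('unit_a', 0.53)]
```

In contrast, a binned representation would collapse all spikes within a bin into a single count, discarding exact spike times.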
In the presence of behavioral data, the model can identify new neurons (units), potentially obtained from different experiments. This is done by freezing the parameters of the network during gradient descent, except for the embeddings of the new units. This is a novel and interesting approach, although it depends on the presence of behavioral recordings.
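The freeze-all-but-new-embeddings idea can be sketched as follows (a toy illustration with made-up parameter names, not the authors' implementation):

```python
# Toy sketch of unit-identification fine-tuning (hypothetical names, not the
# authors' code): every parameter stays frozen except the embeddings of the
# newly added units, which alone receive gradient updates.

def sgd_step(params, grads, trainable, lr=0.5):
    """Apply one SGD step, updating only parameters named in `trainable`."""
    return {
        name: ([w - lr * g for w, g in zip(params[name], grads[name])]
               if name in trainable else params[name])
        for name in params
    }

params = {"attention.w": [1.0, 2.0], "embed.new_unit": [0.5, -0.5]}
grads = {"attention.w": [0.25, 0.25], "embed.new_unit": [0.25, -0.25]}
new = sgd_step(params, grads, trainable={"embed.new_unit"})
print(new["attention.w"])     # frozen  -> [1.0, 2.0]
print(new["embed.new_unit"])  # updated -> [0.375, -0.375]
```

In a deep learning framework the same effect is typically achieved by disabling gradients on the frozen parameters before optimization.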
The method is tested on multiple experiments where neural activity recordings are obtained from motor cortical regions of monkeys while the monkeys are engaged in tasks involving arm movements.
Strengths: - A novel framework for neural population decoding
- Efficient representation of the recordings via event-based tokenization
- Identification of units from new experiments in a way that is consistent with previous experiments, i.e., embedding a new unit such that its embedding location is consistent with the locations of units from previous experiments whose activities/roles are similar to the new unit's.
- Use of multiple experiments done in different labs to demonstrate the method.
- The inference accuracy of the method seems to improve significantly upon strong baselines.
Weaknesses: - I oppose alluding to 'foundation models', which have become popular in natural language processing. Importantly, such models are trained in an unsupervised manner and demonstrate emergent abilities to solve multiple downstream tasks. The proposed architecture depends strongly on the presence of joint behavioral recordings. Similarly, it wouldn't make sense to apply this trained model to recordings from, say, the visual cortex.
- [line 165] Are commands given via natural language? If not, please provide mathematical description instead (or in addition).
- One simple experiment to probe the fidelity of "unit identification" could be: (i) take a recording, learn embeddings of units, etc. (ii) shuffle the order of units and repeat the experiment by adding rows to Embed as described in text. (iii) Compare and report the similarity between the two embeddings.
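The probe suggested in steps (i)-(iii) could be scored as follows (a toy, hypothetical sketch, not the authors' pipeline):

```python
# Toy sketch of the suggested probe (hypothetical, not the authors' pipeline):
# after shuffling unit order and re-learning embeddings, check that each
# re-learned embedding is nearest (by cosine similarity) to its original one.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def nearest_match_rate(original, relearned):
    """Fraction of units whose re-learned embedding is closest to its own
    original embedding; original/relearned map unit id -> vector."""
    hits = 0
    for uid, vec in relearned.items():
        best = max(original, key=lambda oid: cosine(original[oid], vec))
        hits += best == uid
    return hits / len(relearned)

# Stand-ins for embeddings learned before/after shuffling; the re-learned
# vectors are small perturbations of the originals.
original = {"u1": [1.0, 0.0], "u2": [0.0, 1.0]}
relearned = {"u2": [0.1, 0.9], "u1": [0.9, 0.1]}
print(nearest_match_rate(original, relearned))  # -> 1.0
```

A match rate near 1.0 would indicate that unit identification recovers consistent embeddings regardless of unit ordering.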
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - What is the relationship between the context window T (1s) and Tmax (4s) used in positional encoding? How does Tmax change with respect to T?
- Could you provide more details on extending the model to self-supervised tasks (mentioned at the end of the paper)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback and questions! We are very excited to hear that you found the paper to be “novel” and also appreciate our use of “multiple experiments done in different labs”.
In what follows, we will provide point-by-point replies to your questions.
> 1. “I oppose alluding to 'foundation models', which have become popular in natural language processing. Importantly, such models are trained in an unsupervised manner and demonstrate emergent abilities to solve multiple downstream tasks. The proposed architecture depends strongly on the presence of joint behavioral recordings. Similarly, it wouldn't make sense to apply this trained model to recordings from, say, the visual cortex.”
**Reply:** We agree with the reviewer that our current model wouldn’t be considered a foundation model, and we will clarify that in the text. However, POYO makes important advances towards a foundation model for neuroscience on multiple fronts.
1. Scalability: Building foundation models for neural data requires training on large amounts of data, and a key innovation of our work is developing a framework where integration and training on many sessions is made possible.
2. Pre-training and transfer learning: A key advantage of large language and other foundation models is that they enable fine tuning on smaller datasets. In our experiments, we show how our pretrained models POYO-MP and POYO-1 can be used on a number of diverse recordings from non-human primates, even if the data is from a new animal, their behavioral task is different (Figure 2), and when the preprocessing of the data is different (POYO-MP to NLB-Maze). In our few-shot learning experiments, we show that finetuning can quickly integrate new sessions from different labs with fewer than 32 labeled trials comprising less than 2 minutes of recordings.
3. Democratizing AI: In addition to providing pretrained models and transfer strategies, POYO can also be easily adapted and trained without extensive hyperparameter tuning (which usually requires expertise), and without the need for large compute resources. Unit identification can be done on a CPU in a few minutes, or finetuning can be done on a single GPU. We note that the same hyperparameters are used across all our results in Table 2, demonstrating its robustness.
Moreover, as noted in the paper, there is no reason that the POYO tokenization scheme and architecture could not be used for self-supervised learning. We will mention this in the text when we raise the issue of foundation models.
> 2. “Are commands given via natural language? If not, please provide a mathematical description instead (or in addition).”
**Reply:** The command is not given via natural language, we will add this for clarity. We will update the text to include a clear mathematical formulation, to avoid any confusion:
A query token is defined by its embedding $y_{0,i} = \mathrm{Embed}(\text{session id}) + \mathrm{Embed}(\text{behavior id})$ and its timestamp $t_i$.
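The construction above can be sketched in a few lines (embedding tables, identifiers, and dimensions here are illustrative assumptions, not the authors' actual values):

```python
# Hypothetical sketch of the query-token construction described above
# (embedding tables and dimensions are illustrative, not the authors' values).
session_embed = {"session_1": [0.5, -0.25, 0.5]}
behavior_embed = {"cursor_velocity": [0.25, 0.5, -0.25]}

def make_query_token(session_id, behavior_id, timestamp):
    # The token embedding is the elementwise sum of the two learned embeddings.
    emb = [s + b for s, b in zip(session_embed[session_id],
                                 behavior_embed[behavior_id])]
    return {"embedding": emb, "timestamp": timestamp}

tok = make_query_token("session_1", "cursor_velocity", 0.25)
print(tok)  # -> {'embedding': [0.75, 0.25, 0.25], 'timestamp': 0.25}
```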
> 3. One simple experiment to probe the fidelity of "unit identification" could be: (i) take a recording, learn embeddings of units, etc. (ii) shuffle the order of units and repeat the experiment by adding rows to Embed as described in text. (iii) Compare and report the similarity between the two embeddings.
**Reply:** Thank you for suggesting this experiment! We provide an in depth discussion of the results in the General Response (see Table 2). Further visualizations are provided in the attached PDF (See Figure 1).
As you can see, when we tested your idea using POYO-1 weights, we indeed found that the tuned embeddings converge to their original unit embeddings (in a nearest-neighbor sense). We are excited to refine this analysis and include these results in the final paper. Thanks again for your suggestion!
> 4. What is the relationship between the context window T (1s) and Tmax (4s) used in positional encoding? How does Tmax change with respect to T?
**Reply:** This is a great question that we explored in our initial experiments, where we did a sweep over various values of both Tmin and Tmax for POYO-SS (single session) and found that various values of Tmax provided good performance. We hypothesize that as long as Tmax >= T, our model should perform well. We will update appendix C.3 to include these insights.
> 5. “Could you provide more details on extending the model to self-supervised tasks (mentioned at the end of the paper)?”
**Reply:** The PerceiverIO architecture that we build upon is flexible and allows for different predictive targets, which could be adapted to other objectives like the prediction of the firing activity of held out and masked neurons as in the NDT. Contrastive learning can also be used to generate various views (different sub-populations for example) and then learn an embedding that maps these views to similar parts of the latent space (i.e., using BYOL or SimCLR). We plan to add more discussion on these potential directions for future work in the revised manuscript.
---
Rebuttal Comment 1.1:
Title: Thank you for the experiment on unit identification.
Comment: These results will help quantify the extent of unit identification and improve the paper. Thank you.
Thanks also for the other clarifications.
The term "foundation model" is overloaded and means a different thing in NLP. While I agree with the authors' scalability, pre-training, etc. claims, I think considering this as a foundation model (or something along the way to a foundation model) will only further confuse the field. I remain strongly opposed to alluding to it in the main text, except for the Discussion, where those claims could be mentioned and it would be appropriate to suggest that foundation models also have such properties (and more).
I think this is a good paper, and I decided to keep my score at 7.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive response, and your thought-provoking questions. We definitely agree that the term “foundation models” needs to be discussed in more detail. To avoid any confusion, we have removed the use of “foundation models” in the Results. In the discussion, we will clarify what aspects of our model are novel and of interest to the neuroscience community, and what remains to be done in future work. | Summary: Deep learning and transformer models have shown great promise in identifying structure from large datasets. With recent advancements in neural recording methods, it is now possible to generate rich and heterogeneous recordings from large populations of neurons across multiple brain regions and experimental conditions. The proposed work leverages this opportunity by introducing a new approach to modeling neural dynamics across these recordings. The authors leverage transformer models by tokenizing neural activity, preserving the temporal structure by treating each spike as a discrete token. This strategy allows training across multiple sessions and individuals. The success of the approach was extensively tested on several datasets across multiple labs, brain regions, and tasks, demonstrating increased decoding performance compared to alternative methods.
Strengths: The paper is presented clearly and is technically sound. The method was tested and shown to work well across multiple datasets. The proposed strategy leverages the success of transformer models and applies it to neuroscience by introducing a novel way to tokenize neural activity. Extracting shared variability across multiple datasets is crucial for neuroscience and biomedical applications, and this approach could provide a framework for analyzing these emerging datasets. The authors demonstrated the success of the approach when trained on multiple neural recordings and tasks, with the ability to pool data across sessions and animals. Decoding results showed that the method outperforms alternative approaches, including lesion versions of the introduced model, when tested within and across sessions and animals. They also investigated the impact of different architectures and numbers of parameters on performance. Importantly, they showed that minimal fine-tuning allows the model to improve decoding performance on other tasks. The proposed strategy could lay the foundation for studying unifying principles of neural computation.
Weaknesses: However, it should be noted that this strategy heavily relies on having access to multiple large neural recordings, which may not be feasible for most basic neuroscience research where datasets are often limited. While the authors tested the method across multiple brain regions and tasks, all of them were motor tasks in motor-related areas. While this choice likely facilitated the identification of shared structure and enabled transfer learning, it would be even more intriguing if this approach could be applied across categorically different tasks, such as sensory encoding and decision-making, spanning from sensory areas to higher cortical regions. This could potentially help identify universal coding principles of neural computation as well as task-specific coding principles related to motor function.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: I would be interested in applying this method to even more diverse recordings throughout the brain and tasks to explore universal computational principles.
The model allows for neural variability, but still assumes that all neurons are functionally similar. Could this be a problem when looking at other brain regions? Especially if the proportion of excitatory cells is smaller.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors mention some limitations and future work in their paper. However, it would be important to acknowledge the computational costs, training times, and data demands associated with the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review of the paper! We are very excited to hear that you found the paper to be “presented clearly and is technically sound” and also agree that “extracting shared variability across multiple datasets is crucial for neuroscience”.
In what follows, we will provide point-by-point replies to your questions.
> 1. “it should be noted that this strategy heavily relies on having access to multiple large neural recordings, which may not be feasible for most basic neuroscience research where datasets are often limited.”
**Reply:** We want to clarify two points regarding POYO’s ability to help in settings where data is limited. First, we have a single-session variant of our model (POYO-[single session]) which is not pre-trained on any data but is randomly initialized and trained end-to-end on a single session's worth of data. This model, even without access to additional recordings for pretraining, outperforms other single-session methods (Table 2). In addition to working well on single sessions, the method is very sample-efficient: in Figure 4, we show that our model can be trained from scratch from as little as 8 trials' worth of data (which is less than 2 minutes), and is more data-efficient than a GRU model.
Second, we agree that neural datasets are usually limited; this is a core motivation for our work and the creation of a large-scale pretrained model. This work shows that it is possible to combine multiple heterogeneous datasets that are collected in different labs, for different behavioral tasks, and under different experimental conditions, to provide a pretrained model that transfers well to new sessions (Figure 3). Hence, we advocate for leveraging such an approach to make the best use of available datasets. For example, a smaller lab with limited access to data could take a POYO model trained on much larger, open datasets (such as those available on DANDI) and then fine-tune it on their smaller dataset. We believe that expanding our method to decode various behavioral signals within smaller datasets will be an important objective for future work in order to demonstrate this potential use of POYO.
> 2. “I would be interested in applying this method to even more diverse recordings throughout the brain and tasks to explore universal computational principles. The model allows for neural variability, but still assumes that all neurons are functionally similar.”
**Reply:** The question of across region generalization and applying the model to larger-scale across-region questions is an exciting open area for future work! Even now in POYO-1, there is a lot of diversity in the location and areas for units in the different recordings, which span M1, PMd, and S1 (see Figure 1 in the attached pdf). These regions are long known to have distinct firing properties, cell types, and underlying dynamics. Yet, we are able to capture and exploit the activity of these diverse cells in a common model. We think POYO will be an extremely valuable tool in future work to better understand commonalities and differences in the neural code in different regions, both by testing generalization across regions and by studying the properties of models jointly trained on many regions.
> 3. Could this be a problem when looking at other brain regions? Especially if the proportion of excitatory cells is smaller.
**Reply:** In preliminary experiments, we have started to test the model on datasets from the visual cortex with good success. We plan to expand the model and training datasets in a next publication. In principle, we don’t believe that the number of excitatory cells will be a limiting factor, as long as the model is pre-trained on a diverse set of brain regions with different numbers of excitatory cells.
> 4. The authors mention some limitations and future work in their paper. However, it would be important to acknowledge the computational costs, training times, and data demands associated with the proposed method.
**Reply:** Thank you for bringing this up. We discuss some of the computational requirements of our method in Appendix A.4.1, but we plan to include a paragraph in the discussion on this topic. Currently, we show that our largest model, trained on billions of tokens, uses 8 GPUs over a few days. As we increase the number of datasets and the complexity of the model, we expect that computational requirements will increase, but we are optimistic about recent advances in model scaling that have been enabled by breakthroughs in NLP. In terms of data demands, we have shown that large recordings are not a requirement to achieve performance that is competitive with existing methods, but our results suggest that we get significant boosts when a lot of data is leveraged. In settings where a lot of data has not been collected yet, we think that an important question for future work is whether we can leverage learned patterns of the neural code across various brain regions.
That being said, we see the ability to train a large-scale model once and use it in new settings via unit identification or finetuning as the key objective here, as the large model can be trained once but leveraged many more times. To give a concrete example, we plan to publicly release POYO-1, which can be tuned by anyone who has recordings from motor areas of non-human primates. Since the model is pre-trained, it can be tuned with a few samples (sample-efficient, Fig 4), it converges in a small number of steps (training time, Fig 4), tuning can be done with a single GPU or even just a CPU in less than an hour (Appendix A.4.1), and it does not require extensive hyperparameter tuning (all our transfer results in Table 2 were obtained using the same hyperparameters despite major differences across datasets), making our model accessible. We will highlight these advantages in the manuscript.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed and thorough responses! I still believe that this is a worthwhile contribution to the NeurIPS community. | Summary: This paper introduces a novel method for developing models that can predict the activity of neural populations by learning from data recorded across different sessions, tasks, and animals. The method uses a custom tokenization procedure for spikes and a deep neural network architecture based on PerceiverIO. The paper presents the core method, then describes a technique to reuse a large pre-trained model to efficiently learn to predict a new neuronal population/condition, and finally presents a series of validation experiments.
Strengths: - the proposed technique is technically solid, elegant, and performs well in the experiments.
- this technique addresses a real need in neuroscience labs, making the paper potentially very high impact.
- I found the scaling analysis (lines 237-257) particularly instructive, and especially relevant because for a new method like this one the reader may wonder what type of performance they may expect on their own data, as a function of the architectural choices they would have to take when deploying the method. This is of course a very hard thing to assess correctly, but this type of analysis is a great starting point.
- the paper is very well written.
Weaknesses: - I have noticed a few minor issues (mostly related to clarity and plots) in section 3.4. See below under "questions".
- The discussion of related literature seems to imply that all spike train analysis has always been done by binning spikes. It would be good to include at least a cursory mention of kernel methods for spikes and other binless measures: see for instance Paiva, A.R.C., Park, I., Príncipe, J.C., 2010. A comparison of binless spike train measures. Neural Comput & Applic 19, 405–419.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - line 277: "we see a bit of a dip in overall accuracy": how much is a bit? is this data shown anywhere?
- in figure 4, the blue/green colors are hard to see in grayscale. Can you choose colors that render better in grayscale?
- in figure 4, the compute efficiency plots don't seem to be cited anywhere in the text?
- line 289: "is competitive with other baselines". What other baselines?
- in the paragraph "Performance on new animals performing new tasks", it is not clear how much data was used for training and how much for testing. Can you clarify?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Limitation of the present work are appropriately discussed. I see no potential issue with societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review of the paper! We are very excited to hear that you found the proposed technique to be “technically solid and elegant” and also agree that it “addresses a real need in neuroscience labs”.
In what follows, we will provide point-by-point replies to your questions.
> 1. “The discussion of related literature seems to imply that all spike train analysis has always been done by binning spikes. It would be good to include at least a cursory mention of kernel methods for spikes and other binless measures”
**Reply:** Thanks for the suggestion! We will update the related work to include more context and mention other binless methods, such as the paper you cited and related work on kernel methods. When contrasting our spike-based tokenization scheme with other binning-based approaches, we were referring specifically to the deep learning for neural data literature here, where binning has been the approach to-date, because binned representations regularize the structure of spiking data and enable compatibility with existing machine learning frameworks, like MLPs, RNNs, and Transformers.
> 2. “Line 277: "we see a bit of a dip in overall accuracy": how much is a bit? Is this data shown anywhere?”
**Reply:** This is a comment on the results from Table 2. More specifically, the two columns corresponding to Monkey T performing CO and RT tasks respectively. We use POYO-MP, which is pre-trained on multiple sessions from Monkey C and M, and transfer it to the new sessions from Monkey T, using unit identification (meaning that the transformer weights are frozen). We show that despite tuning a small number of parameters (unit embeddings and session embedding), we achieve an accuracy that is only 1% away from the accuracy we obtain when we train all weights from scratch, the latter setting corresponding to POYO-[Single-session]. This is the “dip” we are referring to. We will update the description to be more specific.
> 3. In figure 4, the blue/green colors are hard to see in grayscale. Can you choose colors that render better in grayscale?
**Reply:** Yes, we will update the colors to work well in grayscale in a revision.
> 4. In figure 4, the compute efficiency plots don't seem to be cited anywhere in the text?
**Reply:** Thank you for pointing this out. We have revised the text to include a reference to Figure 4 and will include further discussion on the significance of this result in a final version of the paper.
> 5. Line 289: "is competitive with other baselines". What other baselines?
**Reply:** We are referring to the baselines in Table 2, including the Wiener Filter, MLP, and AutoLFADS. We will make sure to revise this section to make it more clear.
> 6. In the paragraph "Performance on new animals performing new tasks", it is not clear how much data was used for training and how much for testing. Can you clarify?
**Reply:** We use the splits defined in the NLB benchmark. For NLB Maze, there are 1721 training samples and 574 testing samples. For NLB RTT, there are 810 training samples and 270 testing samples.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing the points I raised! The additional work carried out in reply to the other reviewers further strengthens this paper. I confirm my score. | Summary: The authors describe a novel method for the task of neural decoding: using the time series of activity recorded from a population of neurons to predict the values of scientifically relevant target variables. They describe their approach, called POYO, based upon the tokenization of spike data, the application of transformer-based models to this data to generate pre-trained models, and the use of these pre-trained models for neural decoding. They demonstrate that this approach allows for accurate generalization across different sessions of neural recordings from the same animal, and across different animals.
Strengths: The authors present a novel approach for the decoding of neural data, that appears promising in its ability to generalize over different animals and experimental sessions, which is a major problem in neuroscientific data analysis. The performance of their method is impressive, especially in the setting of pretrained models that are finetuned to novel datasets. The ability to combine various datasets in the way described here has the potential to be impactful.
Weaknesses: My main concerns are with regard to scientific rigor, and the scope of the claims made in this work.
- Generalization across brain regions. I believe there needs to be a more thorough discussion of the role of brain regions in tasks where POYO is asked to generalize across individuals with recordings from different brain areas. To what degree will region-specific neural activity impact the ability of POYO to generalize when performing few shot learning? Concretely, what would behavioral decoding results in Table 2 look like if they are broken out per individual brain areas?
- Lines 37-39: "Overall, this lack of correspondence across recordings and channels complicates the integration of information from different experiments and individuals, ultimately hampering efforts to construct a unified perspective on population-level interactions and dynamics in the brain." It is not clear to me how POYO will address these challenges. While it is clear that POYO allows for highly performant decoding, the lack of recurrent structure within POYO makes it difficult to understand how this model would help with these challenges. Perhaps analysis of the attention mechanism could provide some answers to this question, but that point is not discussed in this work.
- Comparison with existing baselines. In Table 1, it is unclear to me why the performance of AutoLFADS + Linear is provided only for the NLB datasets. It would be useful to see the performance of this model on all datasets, as it is a more powerful and commonly used decoding model than the others provided. Comparison with other transformer-based neural decoding approaches like NDT (Ye and Pandarinath 2021) would also provide a better perspective on the value of the methodological advances proposed in this particular work.
I am open to reconsidering my score if my concerns here are addressed.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - How many steps/epochs are fine tuning and unit identification run for? I was unable to find this information in the appendix.
- Line 55: the phrase "neural scaling laws" is ambiguous. Can it be changed to "scaling laws"?
- Lines 349-350: "AutoLFADS extends the LFADS framework with Population Based Training to perform hyperparameter tuning, but has not been demonstrated for multi-session alignment." Is there any reason to believe that AutoLFADS is not capable of performing multi-session alignment? To my knowledge, AutoLFADS is an optimization routine for LFADS. I would assume that it works for multi-session recordings as well.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: As noted by the authors, generalizing this framework across other brain regions (as well as many other conditions) is a key limitation. In general, I have reservations about the use of the phrase "foundation model" without qualification in the context of neural decoding. Is the suggestion of this work that neuroscientists should aim to build a general purpose model that performs decoding regardless of model organism, brain area, or recording modality? While the results of this study are impressive, I do not believe that such claims are justified, even as future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and questions, and for finding that our work "has the potential to be impactful"! In what follows, you will find our point-by-point response to your main concerns, and results from new experiments that we ran to address your feedback.
Please let us know if there’s anything we can clarify further.
> 1. “I believe there needs to be a more thorough discussion of the role of brain regions in tasks where POYO is asked to generalize across individuals with recordings from different brain areas.”
**Reply:** Thank you for your suggestion. In the final paper, we plan to add a detailed discussion of the functional roles of the areas studied, and provide a new visualization of the recording sites across all the datasets that we study (see Figure 2 in attached PDF). Our current efforts have focused on neural recordings in three regions—M1, PMd, S1—that are intimately involved in planning and executing movements. While their functional roles in movement and sensation are distinct, many studies, including ours, have shown that they can all provide information used to guide the decoding of movement. Yet, while different regions serve specific functions during voluntary movement (e.g., contracting muscles and sensing the state of the limb), they may perform these functions using common principles in the neural code.
> 2. “To what degree will region-specific neural activity impact the ability of POYO to generalize when performing few shot learning?”
**Reply:** The datasets we consider in this paper broadly sample reach-related activity across PMd, M1 and S1. These three regions are highly distinct both functionally and based on cytoarchitecture and anatomy. For example, M1 and S1 have been shown to have highly distinct dynamics underlying their population activity. Furthermore, even within a region such as M1, our recording sites are distributed across varying functional sub-regions (Figure 2) and represent a great deal of diversity in terms of the underlying neural functions and firing patterns measured.
In terms of across-region transfer of the model, we believe it is an important question that requires very thorough investigation, and a careful selection of pre-training and fine-tuning datasets which is outside of the scope of this work. We will make sure to include many of these points in our discussion.
> 3. “Lines 37-39: "[...] The lack of recurrent structure within POYO makes it difficult to understand how this model would assist these challenges.”
**Reply:** While we have not yet tackled this question, as the reviewer notes, analysis of the attention heads provides one possible way of using POYO to understand population-level interactions. We will add some discussion of this possible use to the manuscript.
With regards to the comment that there is a “lack of recurrent structure within POYO”, the transformer architecture that we leverage provides powerful and general building blocks for modeling complex sequential data and time series. Perhaps we misunderstood your point. If you can clarify, we will do our best to address your concern!
> 4. “In Table 1, it is unclear to me why the performance of AutoLFADS + Linear only provided for the NLB datasets.”
**Reply:** AutoLFADS is one of the main baselines in the NLB benchmark and the authors provide code and hyperparameters for the NLB datasets. Thus, we were able to reproduce these numbers. However, we were not able to obtain good results without further hyperparameter tuning on the other datasets in Table 2.
Since the original submission, we have been able to run our datasets on NeuroCAAS, which is a cloud service containing the AutoLFADS tuning procedure, to obtain results for 4 of the datasets below.
| | Monkey C, CO 10/13 | Monkey C, CO 10/21 | Monkey T, RT 08/20 | Monkey T, RT 09/06 |
| - | - | - | - | - |
| AutoLFADS + Linear | 0.9292 | 0.9519 | 0.4116 | 0.5061 |
| POYO | 0.9603 | 0.9759 | 0.7986 | 0.8306 |
As can be seen, POYO outperforms AutoLFADS on all of the datasets tested, with large gaps on RT. We will continue these runs as they are compute intensive, and we have 14 additional datasets that we report results on in Table 2.
> 5. Comparison with other transformer based neural decoding approaches like NDT would also provide a better perspective on the value of the methodological advances proposed in this particular work.
**Reply:** Thank you for this suggestion. We were able to reproduce NDT on both NLB datasets, and run other transformer models (see Table 1 in the General Response). We extensively tuned the hyperparameters of all baselines to ensure a fair comparison. Our results suggest that POYO (both single-session and pretrained) outperforms the other transformer approaches.
We would also like to note that we attempted to train NDT on the rest of the datasets, but we faced a similar challenge with hyperparameter tuning. We will continue to tune these baselines and hope to include them on all the evaluation data in Table 2.
> 6. How many steps/epochs are fine tuning and unit identification run for?
**Reply:** Fine tuning and unit identification are run for 50 epochs, with a batch size of 128. We have updated the appendix to include these details.
> 7. “Is there any reason to believe that AutoLFADS is not capable of performing multi-session alignment? To my knowledge, Auto-LFADS is a optimization routine for LFADS.”
**Reply:** AutoLFADS has not been demonstrated in the multi-session condition, and code for multi-session HPO has not been released. While, in theory, AutoLFADS can be applied to the multi-session stitching condition, it is unclear how to tackle the initialization of the alignment matrix for each session, or what HPO strategy to use in the presence of multiple datasets.
We also note that multi-session stitching has only been done in sessions that are recorded from the same subject in the same brain region (and same chronic implant) during a stereotyped center-out task.
---
Rebuttal Comment 1.1:
Title: Thank you.
Comment: I thank the authors for their thoughtful response, which clearly demonstrates significant additional work.
I appreciate the discussion of recording sites (points 1 and 2 in the rebuttal), as well as Figure 2 included with the rebuttal. My concern is less about the inherent diversity of the brain regions included, and more about how the specific sampling of brain regions in each animal affected generalization performance. It would be important to reference the information in Figure 2 of the attached pdf with the results from section 3.4, especially regarding generalization to new animals. In particular, I find Figure 2 makes it easier to appreciate that a pretrained model which has not seen recordings from S1 performs well on S1 data (Lines 283-293).
Likewise, I am happy to see the additional comparisons provided by the authors, especially regarding existing transformer based decoding models. I am now more strongly convinced of the specific innovations proposed in this work.
I can be more clear about point 3 in the rebuttal. Models like LFADS fit an explicit recurrent model of neural dynamics corresponding to an observed sequence of neural data, and part of their value is that after fitting, one can study properties of the inferred dynamical system. In contrast, POYO's focus and demonstrated results are pure prediction. While these are complementary approaches to the general problem of neural decoding, it is not clear to me how POYO's approach can be mapped onto insights about neural dynamics.
Finally, I remain concerned about the use of the phrase foundation model, as also mentioned by reviewer Mfux. I would support the suggestion of mentioning the topic in the Discussion, but not elsewhere in the text. I would also ask the authors to devote significantly more effort to clarifying in the main text how others can access their POYO models and use them on their own data (i.e., access to the code repository and instructions for working with pre-trained models).
I have updated my score to reflect the changes made in the rebuttal.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response and for updating your score! We are happy that our responses and new experiments addressed your concerns.
Re: point 3 and recurrence. Thank you for the clarification. We will add some discussion and comparisons of POYO with dynamics-based approaches in the related work, and will discuss future interpretability experiments needed to better resolve population-level interactions learned by the model. Thanks again for your questions and suggestions!
Re: “foundation model”. We have removed the mention of “foundation model” in the Results section when describing our multi-lab model. We plan to spend more space in the discussion to unpack the implications of our work and talk about next steps that would be needed to incorporate a self-supervised objective into our multi-session architecture.
Re: “Instructions for using our pretrained models”. We are planning to make the code and the models public, and will provide clear instructions in the main text. | Rebuttal 1:
Rebuttal: We would like to thank all of the reviewers for their great feedback and suggestions! The reviewers agreed on the impact of the work and acknowledged the innovations behind the work for multi-session neuroscience.
Some highlights and praise from the reviewers:
- **Method and Approach:** “the proposed technique is technically solid, elegant, and performs well in the experiments.” (Vmcn) “a novel way to tokenize neural activity” (BY6C) “Efficient representation of the recordings via event-based tokenization” (Mfux)
- **Impact:** “The ability to combine various datasets in the way described here has the potential to be impactful.” (c1LQ) “this technique addresses a real need in neuroscience labs, making the paper potentially very high impact.” (Vmcn) “a major problem in neuroscientific data analysis” (c1LQ) “Extracting shared variability across multiple datasets is crucial for neuroscience” (BY6C) “proposed strategy could lay the foundation for studying unifying principles of neural computation”
- **Scaling analysis and rigor of experiments:** “I found the scaling analysis (lines 237-257) particularly instructive” (Vmcn) “Use of multiple experiments done in different labs” (Mfux) “accuracy of the method seems to improve significantly upon strong baselines” (Mfux)
- **Writing and Presentation:** “the paper is very well written.” (Vmcn) “The paper is presented clearly and is technically sound.” (BY6C)
Based upon reviewer comments, we ran a number of new experiments, and are currently working on the following revisions to our original submission:
- **Additional Baselines and Evaluations (c1LQ):** In response to rev (c1LQ)’s comments about including further single-session baselines, we conducted a number of new experiments to compare with the requested baselines (AutoLFADS, NDT [1]), in addition to a supervised-variant of NDT and another supervised neural data transformer baseline, EIT [2]. Across the board, we find that POYO-[Single-session] outperforms the other transformer-based approaches that use binning-based tokenization of the neural activity. Finetuning POYO-MP provides even further improvements in performance beyond the single-session models in the RTT task, demonstrating the power of having more data to pretrain on.
| | NLB-Maze | NLB-RTT |
| - | - | - |
| NDT + Linear | 0.8929 | 0.5895 |
| NDT-Supervised | 0.8708 | 0.4621 |
| EIT | 0.8791 | 0.4691 |
| AutoLFADS + Linear | 0.9062 | 0.5931 |
| POYO-[Single-session] | 0.9470 | 0.6850 |
| POYO-MP | 0.9466 | 0.7318 |
_Table 1: Behavioral decoding performance on NLB Datasets._
- **Neuron shuffling and identification experiment (Mfux):** In response to reviewer Mfux’s great suggestion to conduct a neuron shuffling experiment, we ran unit-identification on a session that was already seen during pre-training. We test whether finetuning on this “new” set of units will map the “new” units close to their true embeddings (found during pre-training). We use our largest model, POYO-1, starting with randomly initialized unit embeddings, and run unit-identification. To compare the newly calibrated set of units with its pre-trained version, we normalize the unit embeddings and report the cosine similarity averaged over the set of units. Note that normalization is done because POYO’s first cross-attention layer applies layer norm to these embeddings. We also report the “unit identification accuracy”, which we define as the ratio of tuned unit embeddings that have their true unit embedding as their nearest neighbor.
| Dataset | # of units | Unit-Identification Accuracy | Unit-Identification Cosine-Similarity |
| - | - | - | - |
| Monkey C, CO, 2013/10/03 | 73 | 1.000 | 0.845 |
| Monkey M, CO, 2014/02/03 | 116 | 0.966 | 0.802 |
_Table 2: Unit re-identification results._
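For readers who want to reproduce these two metrics, here is a minimal sketch (illustrative only, not the authors' actual code; the array layout and function name are assumptions), where rows of both matrices correspond to the same recorded units:

```python
import numpy as np

def unit_identification_metrics(tuned, pretrained):
    """Average cosine similarity and nearest-neighbor accuracy between
    re-identified (tuned) and pre-trained unit embeddings; row i of both
    matrices corresponds to the same unit."""
    # Normalize rows, mirroring the layer norm applied before cross-attention.
    t = tuned / np.linalg.norm(tuned, axis=1, keepdims=True)
    p = pretrained / np.linalg.norm(pretrained, axis=1, keepdims=True)
    # Cosine similarity averaged over matched unit pairs.
    cos_sim = float(np.mean(np.sum(t * p, axis=1)))
    # A unit counts as identified if its true embedding is its nearest neighbor.
    sims = t @ p.T  # (n_units, n_units) pairwise similarities
    accuracy = float(np.mean(np.argmax(sims, axis=1) == np.arange(len(t))))
    return accuracy, cos_sim
```

The accuracy of 1.000 reported above then means every tuned embedding lands closest to its own pre-trained counterpart.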
We include, in Figure 1 of the accompanying pdf, a visualization of the unit embeddings over multiple training steps during the unit-identification stage. We show how the new unit embeddings, which are initially random, converge towards their true embeddings, determined during pre-training.
Overall, this analysis suggests that the unit-identification approach is robust and reliable for identifying units. We believe that understanding the functional similarities of units that are mapped close to each other in the unit embedding space will be an interesting avenue for future work.
**Impact of the work:** The proposed framework achieves a number of new firsts. POYO is the first to integrate and decode population activity across multiple animals using a unified, common model, and is also the first to jointly integrate datasets from different labs and behavioral tasks. The diversity of the datasets that we demonstrate our approach on is staggering: data from 9 nonhuman primates across over 150 sessions, spanning different spike sorting algorithms and threshold crossings, and diverse behavioral tasks from 4 different research labs, measured and executed through completely different manipulanda and at different sampling rates.
---
References:
[1] Ye et al., "Representation learning for neural population activity with Neural Data Transformers." Neurons, Behavior, Data analysis, and Theory 2021.
[2] Liu et al., “Seeing the forest and the tree: Building representations of both individual and collective dynamics with transformers” NeurIPS 2022
Pdf: /pdf/360da2116cbe3c8376ea01cbb11a5393bb2ce8bc.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
CEIL: Generalized Contextual Imitation Learning | Accept (poster) | Summary: This paper aims to develop a simple and scalable IL method, which is applicable to a wide range of IL settings (e.g., offline/online, LfD/LfO, etc.). To achieve this, the imitation policy is decoupled into a contextual policy and a latent variable. Experiments on a wide range of tasks illustrate the effectiveness of the proposed method.
Strengths: 1. This paper is well written and easy to follow. I enjoy reading it.
2. Authors conduct extensive experiments. It is a solid work.
3. I notice that the authors present detailed implementation details on baselines (like GAIL, GAIfO, etc.) in the appendix, which will serve as a good reference for imitation learning researchers.
4. The proposed framework is simple and compatible with many different IL settings.
Weaknesses: 1. Results of online HalfCheetah seems to be lost.
2. Adding comparisons with recently popular (decision-)transformer-style methods in their compatible settings (e.g., [1] for offline IL, [2] for offline multi-task IL, [3] for one-shot IL and LfO) will make this work more promising.
3. The proposed method, CEIL, is not a plug-in method for every IL setting. We still need to make considerable modifications and design choices when meeting new tasks. And many works built upon the generative adversarial imitation learning (GAIL) framework have been proposed to deal with different IL settings, and they work well (e.g., [4] for cross-domain IL, [5] for one-shot IL). A further discussion on the superiority of this work over existing GAIL-style works will be appreciated.
[1] Carroll, Micah, et al. Unimask: Unified inference in sequential decision problems. arXiv preprint arXiv:2211.10869 (2022).
[2] Furuta, Hiroki, Yutaka Matsuo, and Shixiang Shane Gu. Generalized decision transformer for offline hindsight information matching. arXiv preprint arXiv:2111.10364. 2021.
[3] Xu, Mengdi, et al. Hyper-decision transformer for efficient online policy adaptation. arXiv preprint arXiv:2304.08487 (2023).
[4] Franzmeyer, Tim, Philip Torr, and João F. Henriques. Learn what matters: cross-domain imitation learning with task-relevant embeddings. Advances in Neural Information Processing Systems 35 (2022): 26283-26294.
[5] Yu, Lantao, et al. Meta-inverse reinforcement learning with probabilistic context variables. Advances in neural information processing systems 32 (2019).
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. I read the derivation of Equation (6) in the appendix and get confused on the constant $C$. If I am not wrong,
$$
C = \mathbb{E}_{p(z^*)\pi_E(\tau)}\left[\log \frac{\pi_\theta(\tau)}{\pi_E(\tau)}\right] = \mathbb{E}_{p(z^*)\pi_E(\tau)}\left[\log \frac{\mathbb{E}_{p(z^*)}[\pi_\theta(\tau|z^*)]}{\pi_E(\tau)}\right],
$$
how can it be constant with respect to $z^*$? By the way, what do you mean by $p(z^*)$? $z^*$ is a single variable, not a distribution that you plan to optimize, right?
2. Considering the one-shot IL setting and your claim in line 246-248, I wonder how you guarantee that $f_\phi$ will generate a proper latent embedding? Will it be trained in a meta-style when meeting one-shot IL tasks?
I am willing to raise my score if you can answer my questions and solve my concerns. Thanks in advance.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: This works does not seem to have any negative societal impact. Authors discussed its limitations and planned to leave them for future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and thoughtful feedback. We will address your concerns one by one.
**Q1: results of online HalfCheetah.**
**A1:** Thank you for the suggestion. We have provided online HalfCheetah results, Figure 3, and more comparison results in Atari and Adroit domain, Figure 1, in the PDF file in the general response and we will add it in our revision.
**Q2: comparisons with recently popular (decision-)transformer-style methods.**
**A2:** Thank you for the suggestion. We have conducted new comparison against Unimask and GDT (generalized decision transformer). Please refer to the PDF file (Table 2) in the "global" response. We can see that CEIL consistently performs better than or comparable to Unimask and GDT.
**Q3: the superiority of this work over existing GAIL-style works.**
**A3:** Thank you for raising this concern. By comparing CEIL with the plain expert matching objective $\min_\theta D(\pi_\theta(\tau), \pi_E(\tau))$ in existing GAIL-style works, we highlight two merits: 1) CEIL’s expert matching loss does not account for updating $\pi_\theta$ and is only incentivized to update the low-dimensional latent variable $z^*$, which enjoys efficient parameter learning similar to the prompt tuning in large language models, and 2) we learn $\pi_\theta$ by simply performing supervised regression, which is more stable compared to vanilla inverse-RL/adversarial-IL (GAIL-style) methods.
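A toy numerical sketch of this decoupling (illustrative only; the encoder, loss, and optimizer below are simplified stand-ins for CEIL's actual components): a frozen embedding function receives no parameter updates, and only the low-dimensional latent $z$ is moved toward the expert's embedding.

```python
import numpy as np

def match_expert_embedding(encode, z0, e_star, lr=0.05, steps=300, eps=1e-4):
    """Outer-level expert matching: gradient descent on
    ||encode(z) - e_star||^2 over the latent z only, using finite
    differences so any frozen encoder can be plugged in."""
    z = np.array(z0, dtype=float)
    for _ in range(steps):
        base = np.sum((encode(z) - e_star) ** 2)
        grad = np.zeros_like(z)
        for i in range(z.size):
            zp = z.copy()
            zp[i] += eps  # perturb one latent coordinate at a time
            grad[i] = (np.sum((encode(zp) - e_star) ** 2) - base) / eps
        z -= lr * grad
    return z
```

Here `encode` plays the role of the frozen hindsight embedding function $f_{\bar{\phi}}$; in CEIL it would be a learned network, with $z$ optimized by standard autodiff rather than finite differences.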
**Q4: the constant C in the derivation in the appendix.**
**A4:** Thank you for the careful review. It is a typo. In Equation 6 and its derivation in the appendix, 'D_{KL}' should be 'D', where we defined 'D' as the sum of ‘D_{KL}’ and inverse 'D_{KL}' (see Footnote 3 in Page 5, main paper). Therefore, there is no constant C in the corresponding derivation. We will correct it. Thank you.
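For concreteness, the divergence 'D' referred to here — the sum of the forward and reverse KL — can be sketched for discrete distributions as follows (an illustrative implementation, not code from the paper):

```python
import numpy as np

def kl(p, q):
    """Forward KL divergence D_KL(p || q) for discrete distributions
    with full support (all entries strictly positive)."""
    return float(np.sum(p * np.log(p / q)))

def symmetric_d(p, q):
    """D = D_KL(p || q) + D_KL(q || p): the symmetrized divergence,
    i.e., the sum of the KL and the inverse KL."""
    return kl(p, q) + kl(q, p)
```

Unlike either KL term alone, this D is symmetric in its arguments, which is what makes it a single well-defined matching objective.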
**Q5: $z^*$ is a single variable in the implementation, right?**
**A5:** Yes, it is.
**Q6: Will it be trained in a meta-style when meeting one-shot IL tasks?**
**A6:** In response to your concern, we are actually training in a meta-style. Intuitively, Equation 3 treats each trajectory individually as a separate task (a separate z for each trajectory/task), in the same spirit as meta-learning. As the reviewer points out, our approach can also be adapted to multi-task IL and one-shot IL tasks, benefiting from the separate optimization of $\pi_\theta$ and $z^*$.
As for the generalization, how well the model generalizes depends on the (multi-task) data distribution $D(\tau)$.
Thanks again for all of your constructive suggestions, which have helped us improve the quality and clarity of our paper.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thanks to your response, which generally solves my concerns.
I notice that some works on optimizing a contextual variable in reinforcement learning have emerged recently, so extending this idea to imitation learning is natural. I appreciate the contribution of this work, but as I pointed out in Weakness 3, there may still be a lot of work to do to figure out how to obtain $z^*$ in different problem settings, leaving space for further investigation.
I raise my score from 5 to 6. | Summary: The objective of this work is to create an imitation learning (IL) method that functions effectively across a variety of common IL settings, which include online, offline, Learning from Demonstrations (LFD), Learning from Observation (LFO), and cross-domain. To achieve this, a dual-level expert matching goal is proposed. This involves not only trajectory matching with expert demonstration but also matching the latent variable, z, with the expert in latent space.
Contrasting with traditional trajectory matching methods, matching expert trajectories in latent space appears to more accurately replicate expert behavior. Experimental results reinforce the efficacy of this methodology, demonstrating good performance across four different MuJoCo tasks examined in this study.
Strengths: - The proposed method can adjust to five different imitation learning (IL) settings with only minor adjustments to the algorithm, lending the method practical applicability in real-world scenarios.
- The document provides an extensive review of existing methodologies across these five IL settings, thereby allowing for a broad understanding of the field.
- Additionally, by comparing the suggested methods against multiple baselines, good results have been demonstrated.
The presentation of the paper communicates ideas clearly.
Weaknesses: - The claim that all of the online, offline, LfD, LfO, and cross-domain settings are addressed appears to be overly ambitious. Given that only two MuJoCo tasks were evaluated in the online setting and four in the offline setting, the experimental evidence provided is inadequate to substantiate this claim.
- Furthermore, the conducted ablation study does not provide sufficient insights into the effectiveness of the J_MI term, which is designed to align with the expert demonstration in the latent space. To deepen our understanding of J_MI's contribution, an additional ablation baseline, CEIL without the J_MI term, could be quite useful. If, subsequently, we find that the removal of J_MI leads to a performance reduction of 10%, 30%, or even more, we can then quantify the effectiveness of matching expert demonstrations in the latent space with the specific technique used in this work. Providing such a comparison can significantly enhance the reader's understanding of the extent of performance improvements achieved through the application of J_MI.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: ''Offline results generally outperform online results, especially in the LfO setting. ''
Q1: Can you provide more details on this particular issue? Under similar conditions with the same volume of expert demonstration data, the online setting is generally capable of accessing more information. Therefore, it's not quite clear why the proposed method would underperform in an online setting.
Q2: Similar to bi-level learning methods such as GAIL and IRLs, they first learn a discriminator or reward function, followed by policy training based on the learned discriminator or reward function. Frequently, this type of structure tends to struggle with instability and prolonged training durations. Does the proposed bi-level training process encounter similar issues and why?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: See weakness and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and thoughtful feedback. We will address your concerns one by one.
**Q1: the experimental evidence.**
**A1:** We have carried out new experiments in Atari and Adroit domains (see results, Figure 1, in the PDF file in the "global" response). We can see that CEIL consistently achieves better or comparable performance compared to baseline methods in both online and offline IL tasks.
**Q2: an additional ablation baseline CEIL without the J_MI term.**
**A2:** Thank you for such a valuable suggestion for improving this paper. We have carried out new ablation experiments on the J_MI term in both online IL and offline IL settings (see results, Figure 2 and Table 1, in the PDF file in the general response). We can see that ablating J_MI does lead to degraded performance, further verifying the effectiveness of our expert matching objective in the latent space.
**Q3: offline results generally outperform online results, especially in the LfO setting.**
**A3:** Yes. In Appendix 2.6 (supplementary material), we also find this limitation in the online LfO setting. We believe this is due to the lack of explicit exploration bonus, which causes the agent to stay in the collapsed state region and therefore deteriorates performance. To address this limitation, here we explicitly impose a lower bound regularization on the policy entropy to encourage exploration, borrowing the idea from SAC and online Decision Transformer [1]. We implement such a regularization with the Lagrangian relaxation method. We refer the reviewer to our new results (Figure 3) in the PDF file in the "global" response, where we find such a simple exploration strategy can effectively bridge the gap between online IL and offline IL, improving the online LfO performance and outperforming online IL baselines.
[1] Zheng, Qinqing, Amy Zhang, and Aditya Grover. "Online decision transformer." international conference on machine learning. PMLR, 2022.
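As a hedged sketch of the dual update described above (illustrative; the actual implementation in the revised paper may differ), the Lagrange multiplier on the entropy lower bound can be adjusted in the spirit of SAC-style temperature tuning, using the closed-form entropy of a diagonal Gaussian policy:

```python
import numpy as np

def gaussian_entropy(log_std):
    """Closed-form entropy of a diagonal Gaussian policy."""
    return float(np.sum(log_std + 0.5 * np.log(2.0 * np.pi * np.e)))

def dual_update(log_alpha, entropy, target_entropy, lr=0.1):
    """Lagrangian relaxation of H(pi) >= target_entropy: the log
    multiplier grows when entropy falls below the target (pushing the
    policy to explore more) and shrinks once the constraint holds."""
    return log_alpha + lr * (target_entropy - entropy)
```

The policy loss would then add `exp(log_alpha)` times the (negative) entropy, so the multiplier automatically balances imitation against exploration.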
**Q4: Does the proposed bi-level training process encounter similar issues (in GAIL and IRLs) and why?**
**A4:** No, it does not. Many existing (GAIL or IRLs) methods are difficult to train in practice due to an adversarial optimization process over reward and policy approximators (biased or high variance gradient estimators). Another challenge in the GAN-style objective is balancing the performance of the generator (policy) and discriminator. A discriminator that achieves very high accuracy can produce relatively uninformative gradients, but a weak discriminator can also hamper the policy’s ability to learn. However, CEIL learns (a contextual) policy by simply performing supervised regression, which is more stable compared to vanilla GAIL and IRLs. Further, to recover the expert behaviors, CEIL's expert matching objective is optimized over the representation space, which does not depend on the accuracy of the policy. This benefit is even more significant in cross-domain IL settings, since we only need a good representation to characterize trajectories.
Thanks again for all of your constructive suggestions, which have helped us improve the quality and clarity of our paper.
---
Rebuttal Comment 1.1:
Title: Reply to Author
Comment: Thank you for the rebuttal! My concerns have been addressed. | Summary: This work proposed ContExtual Imitation Learning (CEIL), a general method that can be applied to multiple settings, including learning from observations (LfO), offline IL, cross-domain IL, and one-shot IL. CEIL incorporates the hindsight information-matching principle within a bi-level expert matching objective, which decouples the learning policy into a contextual policy and an optimal embedding. Empirical analysis demonstrates that CEIL is more sample efficient in online IL and performs well in offline IL settings.
Strengths: 1. CEIL is closely related to hindsight information-matching methods. CEIL introduces an additional context variable $z$ to learn a contextual policy $\pi_\theta(a|s, z)$ and an optimal contextual variable $z^*$. The idea is to use the learned $z^*$-conditioned policy $\pi_\theta(a|s, z^*)$ to recover the expert data. CEIL is learned through a bi-level expert matching objective: explicitly learn a hindsight embedding function in the inner-level optimization, and perform expert matching via inferring an optimal embedding in the outer-level optimization. Such a decoupling procedure enables CEIL to generalize to diverse IL settings.
2. Unlike the prior hindsight information-matching methods, CEIL does not require explicit handling components such as explicit rewards in online RL and handcrafted target return in offline RL.
3. CEIL is a scalable method that can be applied to LfD, LfO, offline IL, cross-domain IL, and one-shot IL. CEIL was evaluated in diverse settings in the experiments. The results on four MuJoCo environments show that CEIL achieves better sample efficiency than other baselines in the online IL tasks. Extensive ablations demonstrated that CEIL is not sensitive to the number of demonstrations and window size of trajectory.
Weaknesses: 1. It seems that the pre-defined return $f_R(\tau)$ in Equation (2) is closely related to the hindsight embedding function $f_\phi(\tau)$ in Equation (3). However, their relationship has not been discussed.
2. In the cross-domain online LfO experiments (Hopper, Figure 2(d)), CEIL performs worse than AIRL (state only). There is no explanation for this result.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What is the relationship between the return $f_R(\tau)$ in Equation (2) and the hindsight embedding function $f_\phi(\tau)$ in Equation (3)?
2. In Equation (7), what is the motivation for applying stop gradient operation to $f_{\bar{\phi}}$? Is there any guarantee that this operation will satisfy the support in Equation (5)?
3. Why does CEIL perform worse than AIRL in the cross-domain online LfO task (Hopper, Figure 2(d))?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: 1. CEIL lacks an explicit exploration bonus; thus, the offline results generally outperform the online results, especially in the LfO setting.
2. The trajectory self-consistency cannot be applied to cross-embodiment agents once the two embodiments/domains have different state spaces or action spaces.
---------------------
After rebuttal
---------------------
Thanks to the authors for their rebuttal. I'd like to increase my score to 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We thank the reviewer for the detailed and thoughtful feedback. We will address your concerns one by one.
**Q1: the relationship between the pre-defined return $f_R(\tau)$ in Equation 2 and $f_\phi(\tau)$ in Equation 3.**
**A1:** Compared to the pre-defined return $f_R(\tau)$, $f_\phi(\tau)$ can be seen as a more general formulation representing the hindsight information of a trajectory. $f_\phi(\tau)$ could be any function of a trajectory that captures some statistical properties in state-space or trajectory-space, e.g., sufficient statistics of a distribution such as the mean, variance, or higher-order moments [1]. Empirically, we also find that trajectories with high $f_R(\tau)$ are closer to the encodings of expert trajectories in the latent space (as implied by Figure 1 in the main paper). Thank you for the valuable suggestion; we will add a new discussion in our paper revision.
[1] Furuta H, Matsuo Y, Gu S S. Generalized decision transformer for offline hindsight information matching[J]. arXiv preprint arXiv:2111.10364, 2021.
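As one illustrative special case (our reading of the relationship described in A1, not a formula taken from the paper): a return-conditioned method uses the fixed scalar statistic

$$f_R(\tau) = \sum_{t=1}^{T} r(s_t, a_t) \in \mathbb{R},$$

whereas the hindsight embedding $f_\phi(\tau) \in \mathbb{R}^k$ is a learned, vector-valued statistic of the trajectory; $f_R$ can thus be viewed as a fixed, one-dimensional instance of the more general hindsight information $f_\phi$.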
**Q2: a stop gradient operation to $f_{\bar{\phi}}$ in Equation 7. The guarantee for the support constraint.**
**A2:** In Equation 7, we are optimizing $z^*$; thus, we add the stop-gradient operator to $f_{\bar{\phi}}$. The guarantee for the support constraint comes from BCQ [2]. Intuitively, Equation 7 minimizes the distance of the selected $z^*$ to the embeddings of the batch data ($\tau_E$ and $\tau_D$), forcing $z^*$ to behave close to a subset of the offline and expert behaviors. The main difference is that BCQ implements the offline support constraint over the action space, while we impose it over the latent space.
[2] Fujimoto S, Meger D, Precup D. Off-policy deep reinforcement learning without exploration[C]//International conference on machine learning. PMLR, 2019: 2052-2062.
**Q3: Why does CEIL perform worse than AIRL in the cross-domain online LfO task (Hopper, Figure 2(d))?**
**A3:** As implied by the reviewer, we believe that this is due to the lack of an explicit exploration bonus, which causes the agent to stay in a collapsed state region and therefore deteriorates performance. To address this limitation, we explicitly impose a lower-bound regularization on the policy entropy to encourage exploration, borrowing the idea from SAC and the online Decision Transformer [3]. We implement this regularization with the Lagrangian relaxation method. We refer the reviewer to our new results (Figure 3) in the PDF file in the general response, where we find that such a simple exploration strategy can effectively bridge the gap between online IL and offline IL, improving online LfO performance and outperforming online LfO baselines.
[3] Zheng, Qinqing, Amy Zhang, and Aditya Grover. "Online decision transformer." international conference on machine learning. PMLR, 2022.
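For readers unfamiliar with the Lagrangian-relaxation trick mentioned in A3, here is a minimal, self-contained sketch (our illustrative reading of the SAC/ODT-style temperature update, not the authors' actual implementation; the function name, learning rate, and toy numbers are hypothetical): a multiplier $\alpha = \exp(\log\alpha)$ is adjusted by dual gradient descent so that the policy entropy is pushed above a target lower bound.

```python
import math

def entropy_lagrangian_step(policy_entropy, log_alpha, target_entropy, lr=0.1):
    """One dual-descent step on log_alpha for the constraint H(pi) >= target.

    The dual loss L(alpha) = alpha * (H(pi) - target) is minimized w.r.t.
    log_alpha, so the multiplier alpha = exp(log_alpha) grows whenever the
    policy entropy drops below the target (more exploration pressure) and
    shrinks otherwise.
    """
    grad = math.exp(log_alpha) * (policy_entropy - target_entropy)
    return log_alpha - lr * grad

# Entropy below the target -> log_alpha increases; above -> it decreases.
la_low = entropy_lagrangian_step(policy_entropy=-1.0, log_alpha=0.0, target_entropy=0.5)
la_high = entropy_lagrangian_step(policy_entropy=2.0, log_alpha=0.0, target_entropy=0.5)
```

In practice the entropy term would be estimated from policy samples and the penalty $\alpha \cdot (\text{target} - H(\pi))$ added to the policy loss, but the sign logic above is the core of the relaxation.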
**Q4: the two embodiments/domains have different state spaces or action spaces.**
**A4:** We appreciate your concern about the state/action space. If cross-domain agents have different action/state spaces, a typical approach is to serialize states/actions from different modalities into a flat sequence of tokens [4]. We remark that CEIL is also compatible with such a tokenization approach, making it suitable for IL tasks with different action/state spaces, though this is beyond the scope of the current work.
[4] Reed S, Zolna K, Parisotto E, et al. A generalist agent[J]. arXiv preprint arXiv:2205.06175, 2022.
Thanks again for all of your constructive suggestions, which have helped us improve the quality and clarity of our paper.
---
Rebuttal Comment 1.1:
Title: Reply to author
Comment: Thank you for the rebuttal. I would like to increase my score to 6. | Summary: This paper presents a method that aims to address Imitation Learning (IL) tasks by simultaneously updating the embedding function of a contextual variable, an optimal contextual variable, and a policy conditioned on that variable. The proposed method learns the conditional policy by minimizing the trajectory self-consistency loss based on the concept of hindsight information matching. The optimal contextual variable is updated by minimizing the discrepancy between the learned conditional policy and the expert policy. The embedding function is optimized based on both the self-consistency loss and the discrepancy loss. The experimental results demonstrated improved performance in the following tasks: (1) learning from observations (LfO), (2) online/offline IL, (3) cross-domain IL, and (4) one-shot generalization IL.
Strengths: 1. The proposed method is novel, and enables solving a variety of IL tasks with minimal adjustments.
2. The experiments were conducted on 8 different IL settings and outperformed previous baselines in most environments.
3. The empirical analysis in Section 5.2 offers interesting insights into the practical application of the proposed method.
Weaknesses: 1. A number of hyperparameters are introduced in this work, such as the embedding dimension, the trajectory window size, the architecture of the encoder/decoder networks. However, the authors do not provide any guidance or recommendations regarding the selection of these hyperparameters.
2. There appears to be a disparity between the theoretical objective (Eq.(4), Eq.(5)) and the practical objective (Eq.(8), and Line 6 in Algorithm 1) when it comes to optimizing the hindsight embedding function $f_{\phi}$. Theoretically, the update of $f_{\phi}$ should solely be based on the trajectory self-consistency loss, as mentioned in Eq.(4). In practice, however, $f_{\phi}$ is also updated according to Eq.(8).
3. The notation for the regularization losses is ambiguous. Specifically, there are two regularization losses used in this work, one for regularizing the embedding function $f_{\phi}$ (Eq.(9)), and another for regularizing the optimal contextual variable $\mathbf{z}^*$ (Eq.(7)). Unfortunately, both of these losses are represented by the symbol $\mathcal{R}$, and their definitions are scattered throughout the manuscript, resulting in unnecessary confusion.
4. The confidence intervals are not reported in Table 2-4.
5. The expert trajectories generated by SAC used in the experiments have not been provided, which could potentially pose challenges for replicating the results in future studies.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. As mentioned in Weakness (1), how are the hyperparameters chosen in the experiments? Could the authors offer guidance on the process of selecting the hyperparameters (those in Appendix Table 4)?
2. As stated in Weakness (2), could the authors provide ablation studies and further clarification on the optimization of $f_{\phi}$ as described in Eq.(8)?
3. As mentioned in Weakness (3), could the authors clarify that there exist two regularization losses in this work? If possible, it would be best to include both $\mathcal{R}(f_{\phi})$ and $\mathcal{R}(\mathbf{z}^*)$ in Eq.(5). Alternatively, the authors could at least mention (in Line 164-165) that $\mathbf{z}^* \in f_{\phi}\circ\mathrm{supp}(\mathcal{D})$ is achieved through minimizing $\mathcal{R}(\mathbf{z}^*)$. It is acceptable to use the symbol $\mathcal{R}$ to represent both losses, as long as it is straightforward for readers to distinguish between the two.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Minor suggestions:
- Section 3.1, Line 93: The transition dynamics function should be $\mathcal{T}$, not $\mathcal{P}$.
- Section 3.1, Line 98: The transition dynamics function should be $\mathcal{T}$, not $T$.
- Section 3.2, Line 124: Missing citation for hindsight experience replay (HER) [[1]].
- Algorithm 1, Line 4: The two $\mathcal{D}$ here can be combined into a single $\mathcal{D}$ for simplicity.
[1]: https://arxiv.org/abs/1707.01495
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and thoughtful feedback. We will address your concerns one by one.
**Q1: guidance on the process of selecting the hyperparameters in the Appendix.**
**A1:** For the size of the embedding dictionary, we selected it from a range of [512, 1024, 2048, 4096]. We found 4096 to almost uniformly attain good performance across IL tasks, thus selecting it as the default. For the size of the embedding dimension, we tried four values [4, 8, 16, 32] and selected 16 as the default. For the trajectory window size, we tried five values [2, 4, 8, 16, 32] but did not observe a significant difference in performance across these values, so we selected 2 as the default. For the learning rate scheduler, we tried the default PyTorch scheduler and CosineAnnealingWarmRestarts, and found that CosineAnnealingWarmRestarts yields better results (thus we selected it). Other hyperparameters are consistent with the default values of most RL implementations, e.g., a learning rate of 3e-4 and an MLP policy.
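As context for the scheduler choice above, the SGDR formula that PyTorch's `CosineAnnealingWarmRestarts` implements can be sketched in pure Python (a step-indexed helper of our own; the authors' actual scheduler configuration is not shown):

```python
import math

def cosine_annealing_warm_restarts(t, T_0, eta_max, eta_min=0.0, T_mult=1):
    """Learning rate at step t under the SGDR schedule
    eta = eta_min + 0.5 * (eta_max - eta_min) * (1 + cos(pi * T_cur / T_i)),
    where T_cur resets to 0 at every restart and the cycle length T_i
    grows by a factor of T_mult after each restart."""
    T_i, T_cur = T_0, t
    while T_cur >= T_i:
        T_cur -= T_i
        T_i *= T_mult
    return eta_min + 0.5 * (eta_max - eta_min) * (1.0 + math.cos(math.pi * T_cur / T_i))

lr_start = cosine_annealing_warm_restarts(0, T_0=10, eta_max=3e-4)     # cycle start
lr_mid = cosine_annealing_warm_restarts(5, T_0=10, eta_max=3e-4)       # halfway down
lr_restart = cosine_annealing_warm_restarts(10, T_0=10, eta_max=3e-4)  # warm restart
```

The periodic jumps back to `eta_max` are what distinguish this schedule from plain cosine annealing and can help escape sharp basins during training.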
**Q2: ablation studies on the optimization of $f_\phi$.**
**A2:** Thank you for such a valuable suggestion. We have carried out new ablation experiments on the loss of $f_\phi$ in both online IL and offline IL settings (see results, Figure 2 and Table 1, in the PDF file in the "global" response). We can see that ablating the $f_\phi$ loss (optimizing $f_\phi$ with Equation 5) does degrade the performance in both online and offline IL tasks, demonstrating the effectiveness of optimizing $f_\phi$ with Equation 8. Intuitively, Equation 8 encourages the embedding function $f_\phi$ to be task-relevant, and thus we use the expert matching loss to update $f_\phi$.
**Q3: the notation for the regularization losses.**
**A3:** We appreciate your suggestion. Actually, the support constraint is achieved through minimizing $\mathcal{R}(\mathbf{z}^*)$. There is only one regularization (for cross-domain IL) in this work. We will clarify this in our paper revision.
**Q4: confidence intervals, expert data, and other suggestions.**
**A4:** Thank you for the valuable suggestions. We will incorporate/elaborate on them in our revision.
Thanks again for all of your constructive suggestions, which have helped us improve the quality and clarity of our paper.
---
Rebuttal Comment 1.1:
Comment: I've read the authors' rebuttal and will maintain my score of 6. | Rebuttal 1:
Rebuttal: Dear reviewers,
Thank you for all of your constructive suggestions, which have helped us improve the quality of our paper.
This general response provides a summary of the experimental requirements you suggested. Below, we list the content of each chart presented in the submitted PDF.
+ Figure 1: New experimental results in Atari and Adroit domains. [Reviewers vVku and 2c9A.]
+ Figure 2 and Table 1: Ablation studies on the optimization of $f_\phi$ and the objective $J_{MI}$. [Reviewers PiwE and 2c9A, respectively.]
+ Figure 3: Adding exploration bonus for C-on-LfO tasks. [Reviewers 7vt9 and 2c9A.]
+ Table 2: Comparison with respect to transformer-style methods. [Reviewer STjS.]
Thanks again for reviewing our submission; we do not take it for granted. We hope we have resolved all of your concerns and are always willing to address any further questions.
Pdf: /pdf/6725b870f6195766cc46137988c50615068d1e61.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: One recent idea in reinforcement learning is to learn a sequential model that can predict state-action transitions and rewards, and then obtain a strong policy by inferring actions conditioned on high reward. This has the major benefit that even low-reward trajectories provide useful information for learning.
The challenge in adapting this idea to imitation learning is that in imitation learning, we do not have access to rewards, but rather a dataset of expert demonstrations, plus (potentially) datasets of other suboptimal behavior (e.g. rollouts from a policy in the online setting, static dataset from suboptimal policies in the offline setting).
The proposed solution is to simply infer / learn a contextual variable $z$ that performs a similar role as reward: that is, it determines the “type” of policy / actions that make up the trajectory. To avoid $z$ from encoding too much information, we make it very limited and regularize it. We can then train a $z$-conditional policy that can produce actions that mimic both the suboptimal behavior as well as the expert demonstrations (using a self-consistency loss). We also learn an optimal setting $z^*$ that selects the high-performing policy, by ensuring that the $z^*$-conditional policy produces trajectories with high similarity to the expert trajectories. There is a significant amount of math to flesh this out, which I won’t get into.
The authors test their method on four MuJoCo environments, using expert demonstrations from a SAC policy, in both online / offline settings, in both full demos / observation-only settings, and with / without a distributional shift (modifying the torso length) in the test environment, and show that CEIL tends to match or improve upon the results from a variety of baselines.
Strengths: 1. The area of imitation learning is important and relevant, and the approach suggested covers a wide variety of use cases.
2. The idea proposed is conceptually simple: train a network that models a variety of policies, and then select an appropriate high-performing policy through the use of context variable that controls the policy. Similar ideas have seen significant success in reinforcement learning.
3. The empirical evaluations are quite extensive with an impressive number of baselines, and show strong performance of the author’s method in a wide variety of settings.
4. I particularly appreciated Table 1, which provides an excellent overview of related work.
5. While I found the paper hard to read in an absolute sense, I think that is mostly because the ideas are quite technical: relative to other imitation learning papers focused on expert matching, I found this paper easier to read and understand.
Overall, I recommend accepting the paper. However, I should note that I am not very familiar with the imitation learning literature, and so cannot provide an evaluation of the following aspects:
1. How novel / original these ideas are
2. Whether the baselines chosen are appropriate (e.g. perhaps the paper has not compared to current SOTA)
3. Whether the performance of baselines is in line with expected numbers (e.g. perhaps the authors did not tune hyperparameters of baselines well)
Weaknesses: 1. The empirical evaluations are entirely based on MuJoCo. It is unclear whether the strong performance will generalize to very different settings (e.g. Atari).
2. Though the paper aims to provide a general and broadly applicable imitation learning algorithm, it doesn’t tackle the most typical imitation learning setting: where we have access only to a dataset of expert demonstrations. (In particular, even in the offline setting, CEIL assumes access to a dataset of suboptimal behavior.) In principle we could run the algorithm in such a setting, though my guess is that it will not perform as well as baseline methods like behavior cloning, since the major benefit of CEIL is in its ability to leverage suboptimal data. (However, the expert-demos-only setting tends to be very vulnerable to spurious correlations / overfitting, and so is not as significant as the other settings.)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How would you expect CEIL to work in settings other than MuJoCo?
Have you run CEIL with access only to expert demonstrations? How does it perform in that setting?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper should mention that it does not tackle the setting in which we only have expert demonstrations (even in the offline setting, the paper assumes access to a static dataset of suboptimal behavior, specifically D4RL in its experiments).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and thoughtful feedback. We will address your concerns one by one.
**Q1: whether the strong performance will generalize to very different settings (e.g. Atari).**
**A1:** As suggested by the reviewer, we have carried out new experiments on Atari and Adroit domains. We refer the reviewer to our new results (Figure 1) in the PDF file in the general response. We can see that CEIL consistently achieves better or comparable performance compared to baseline methods in both online (Atari) and offline (Adroit) IL tasks.
**Q2: run CEIL with access only to expert demonstrations.**
**A2:** Thank you for the suggestion. In single-domain IL tasks (with only expert demonstrations), CEIL and BC (behavior cloning) are essentially the same, since Equation 5 is all about fitting expert demonstrations. However, if we consider cross-domain IL settings (with expert source and target data), CEIL can effectively use the source-domain data to learn the contextual behavioral policy, and then use the expert target data to fit $z^*$. Intuitively, although the source data is expert in the source domain, it can also be viewed as sub-optimal data for the target environment, so CEIL can effectively utilize the cross-domain data to improve performance, whereas simple BC cannot.
Here, we also run CEIL with access only to expert-level behaviors in the offline cross-domain IL settings and compare it to three baselines: 1) *BC over target* denotes that we train BC agent only over the target data; 2) *BC over source+target* denotes that we train BC agent over the combined source and target data; 3) *BC Fine-tuning* denotes that we first train BC agent over the source data and then fine-tune the agent over the target data.
| |BC over target |BC over source+target |BC Fine-tuning |CEIL|
| ---- | ---- | ---- | ---- | ---- |
|Hopper |46.5 |37.8 |69.9 |**93.7** |
|Halfcheetah |15.4 |11.5 |44.3 |**48.1** |
|Walker2d |97.6 |92.5 |104.2 |**111.8** |
|Ant |72.6 |72.0 |83.1 |**95.0** |
We can see that both *BC over source+target* and *BC over target* perform poorly (with 5 expert demonstrations in the target domain), and *BC Fine-tuning* slightly improves performance. Our CEIL achieves the best performance.
Thanks again for all of your constructive suggestions, which have helped us improve the quality and clarity of our paper.
---
Rebuttal Comment 1.1:
Title: Thanks for the additional experiments!
Comment: I have read the authors' rebuttal and the other reviews, and am maintaining my score of 7. | null | null | null | null | null | null |
Riemannian SAM: Sharpness-Aware Minimization on Riemannian Manifolds | Accept (poster) | Summary: This paper extends the popular idea of SAM from Euclidean space to Riemannian manifolds and proposes a new optimization framework, Riemannian SAM, which generalizes an existing technique. The authors also provide a convergence analysis of the newly proposed framework.
Strengths: 1. The authors proposed a general framework derived from first principles, making the paper easy to follow and providing readers with sufficient background to understand it.
2. Algorithm 1 is presented nicely, and makes a lot of sense under the SAM framework.
3. The convergence analysis is much needed, as it assures readers that the new Riemannian SAM is not much worse than the Euclidean SAM.
4. Experiments are convincing, and are performed appropriately.
Weaknesses: 1. We need a bit more motivation on why Riemannian optimization needs SAM, as some optimization problems can be solved both in Euclidean space and on Riemannian manifolds (say, matrix sensing), and if they have similar guarantees, users would obviously avoid constrained optimization. My understanding is that for these problems, the additional structure that Riemannian optimization brings is already important, and SAM might not help a lot. I understand the authors tried to make this point by using the two examples, but it would be better to have some theoretical explanations. Also, this paper would benefit a lot from explaining why (7) is the best SAM formulation for Riemannian optimization, as it seems like a direct transport from the Euclidean problem.
2. Not many new insights or proof techniques are proposed in this paper; the proof procedures closely resemble those for the Euclidean problem and are adapted to the Riemannian regime by assuming a chain of conditions (C-1) to (C-5). Therefore, no specific new observations are made regarding the Riemannian problem, and the authors are merely adapting the existing framework to "make it work" on Riemannian manifolds.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: N/A
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive feedback.
**[On Motivation on Riemannian SAM formulation]**
We believe that considering sharpness-aware minimization in Riemannian optimization could help the optimization. The "flat minima hypothesis" put forth in Euclidean space suggests that neural networks achieve better generalization when they are trained to converge to flatter regions of the loss landscape. In this sense, Sharpness-Aware Minimization (SAM), which encourages flat minima by (informally) minimizing the Euclidean gradient norm, has been demonstrated to effectively enhance model generalization in large-scale deep learning scenarios such as Vision Transformer and MLP-Mixer. Building on this, since an $n$-dimensional manifold locally resembles an $n$-dimensional Euclidean space, we expect that the flat minima hypothesis proposed in Euclidean space would hold to some extent for objective functions defined on general manifolds. The Riemannian SAM algorithm can be informally understood as reducing the Riemannian gradient norm $\lVert \mathrm{grad} f(w)\rVert_w$, which reflects the local curvature and geometry of the manifold (hence, the underlying structure of the data) at the point $w$.
In addition, for some optimization problems that can be solved via both Euclidean SAM and Riemannian SAM, the Euclidean algorithm might not adequately take into account the underlying geometry of the manifold.
In order to validate our intuition, we address the challenge of optimizing a neural-net-like objective function defined on the unit sphere manifold $\mathbb{S}^{2}$.
The synthetic dataset is generated by drawing a total of $500$ samples from a standard Gaussian distribution $\mathcal{N}(0, 1^2)$ for $X$ and a uniform distribution $\mathcal{U}(0,1)$ for $y$, resulting in $X\in\mathbb{R}^{500 \times 3}$ and $y \in\mathbb{R}^{500}$. We choose a non-linear regression MSE loss, specifically $f(w)=\frac{1}{2n}\lVert y - \mathrm{ReLU}(Xw)\rVert_2^2$. In order to craft an objective function on the unit sphere, we impose the constraint $\mathcal{C} = \lbrace w\in\mathbb{R}^3: \lVert w\rVert_2=1\rbrace$ on the model parameter $w\in \mathbb{R}^3$.
Figure 1-(a) in the attached PDF file converts Cartesian coordinates $w=(x,y,z)$ to spherical coordinates $(r,\theta,\varphi)=(1,\theta,\varphi)$ and renders contour plots. It showcases the converged points on the objective function under optimization with Riemannian SAM (in purple) and the conventional Euclidean SAM (in pink).
Within a maximum iteration budget of $100$, we tune hyperparameters for each optimization algorithm. In Figure 1-(a), the purple point (Riemannian SAM) attains a loss value of $0.3800$, while the pink point (conventional Euclidean SAM) converges with a slightly higher loss value of $0.3808$.
Furthermore, in terms of sharpness measures, we considered the following two basic quantities: (i) the trace of the Hessian (sharpness in the context of Euclidean space), and (ii) Manifold-Aware Sharpness, characterized by the Riemannian gradient norm $\lVert\mathrm{grad} f(w)\rVert_w$. Notably, Manifold-Aware Sharpness aligns with Information-Geometric Sharpness [JLP+22] when dealing with statistical manifolds, where the Riemannian metric is defined by the Fisher information. For the aforementioned problem, we compare the two metrics, and Figure 1-(b,c) in the attached PDF file depicts the results. As seen in Figure 1-(b,c), Riemannian SAM achieves smaller sharpness values than Euclidean SAM, implying convergence toward flatter regions.
In other words, since the Euclidean SAM might fail to properly consider the underlying structure of the manifold even for toy examples, this phenomenon is expected to be exacerbated in extremely high-dimensional problems such as deep learning.
**Reference**
- [JLP+22] A Reparametrization-Invariant Sharpness Measure Based on Information Geometry, NeurIPS 2022.
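The sphere-manifold setup above can be sketched numerically. Below is a minimal SAM-style step on the unit sphere, assuming orthogonal projection onto the tangent space, renormalization as the retraction, and projection in place of an exact vector transport; the perturbation radius `rho`, step size, and function names are illustrative, not the paper's exact algorithm.

```python
import math

def project_tangent(w, g):
    """Orthogonally project an ambient (Euclidean) gradient g onto the
    tangent space of the unit sphere at w: grad f(w) = g - <g, w> w."""
    dot = sum(gi * wi for gi, wi in zip(g, w))
    return [gi - dot * wi for gi, wi in zip(g, w)]

def retract(v):
    """Retraction onto the sphere by renormalization."""
    n = math.sqrt(sum(vi * vi for vi in v))
    return [vi / n for vi in v]

def riemannian_sam_step(w, euclid_grad_fn, rho=0.05, lr=0.1):
    """One illustrative SAM-style step on the sphere: ascend along the
    normalized Riemannian gradient to a perturbed point, then descend at w
    using the gradient evaluated at the perturbed point (projection onto
    the tangent space at w is a crude stand-in for a vector transport)."""
    g = project_tangent(w, euclid_grad_fn(w))
    g_norm = math.sqrt(sum(gi * gi for gi in g)) or 1.0
    w_adv = retract([wi + rho * gi / g_norm for wi, gi in zip(w, g)])
    g_adv = project_tangent(w, euclid_grad_fn(w_adv))
    return retract([wi - lr * gi for wi, gi in zip(w, g_adv)])

# Toy check: minimizing f(w) = w_z from w = (1, 0, 0) should push the
# z-coordinate negative while keeping w on the sphere.
w1 = riemannian_sam_step([1.0, 0.0, 0.0], lambda w: [0.0, 0.0, 1.0])
```

Replacing the projection-based transport with an exact parallel transport, or the renormalization with the exponential map, recovers more faithful variants of the update at higher cost.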
---
**[On Theoretical Insights]**
Due to the space constraint of rebuttals, we answer this concern for theory in general response. Please refer to “**on theoretical side**” in **1. Riemannian SAM is a non-trivial extension of Euclidean SAM with novel theoretical insights** in general response.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response, and I appreciate your time and effort for preparing these results.
I understand that there may be benefits to using SAM in the Riemannian setting, but your paper, including your rebuttal, did not give me any high-level intuition on why it is important, or how it affects the optimization landscape differently compared to the Euclidean setting (or whether it is identical). I am saying that this paper still lacks insights that would benefit a wider audience, which is the most important part of conference papers.
However, I do agree that this framework has a lot of its own benefits; the authors did a great job with the presentation and derivations, and I do believe NeurIPS would benefit from having this paper. Therefore, I am keeping my original score.
---
Reply to Comment 1.1.1:
Title: Response to the reviewer
Comment: Thanks for your valuable comments.
Following the reviewer's suggestion, in the revision we will clarify the contributions of our paper, incorporating the intuition behind Riemannian SAM in terms of both theory and practice.
Strengths: - The extension of SAM to Riemannian manifolds is natural and meaningful.
- Theoretical guarantees for convergence are provided.
Weaknesses: - Even if the paper is in general well-written, there are parts where the presentation is quite high-level and thus becomes unclear. (See questions for specific examples)
- Perhaps some figures could have been used to make the paper more accessible and easy to understand.
- I think that the experimental section implicitly assumes familiarity with the associated techniques, which makes the paper not self-contained. Perhaps some details about the settings could have been included in the appendix, e.g., the actual models and the corresponding parameter spaces, the retraction maps, the vector transports, etc.
- As the performance of the proposed method seems to be close to the Riemannian Adam technique, I believe that at least some error-bars could have been included to better justify the difference in performance.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Q1. It is not clear in many places if the manifold $\mathcal{M}$ is considered as being embedded within an ambient (Euclidean) space or as (a subset of) $\mathbb{R}^D$ together with a Riemannian metric as in Fisher SAM. In the embedded case, typically the Euclidean gradient is computed, which is then projected orthogonally to the tangent space followed by a retraction map. In the other case, the gradient is the Euclidean gradient multiplied by the inverse of the metric, while for the update step, simple addition is used.
Q2. Related to Q1. The steps in lines 146-147 are quite confusing. Is it implied here that the actual manifold on which $\mathcal{L}$ is defined is actually a surface embedded within an ambient Riemannian manifold where the metric $g(w)$ defines the structure? If this is the case, then the "orthogonal" projection on the tangent space should be defined with respect to the ambient Riemannian metric?
Q3. Related to Q1 & Q2. In Eq 15, could you explain based on Q1 and Q2 why we need to use both the inverse of the metric and the projection to compute the gradient?
Q4. Is the Stiefel manifold used somewhere as it is presented in line 130?
Q5. In Eq. 12 it is mentioned that higher-order Riemannian gradient is necessary. Could you elaborate on which computation is infeasible? Due to the chain-rule the derivative of the retraction map with respect to the base point $w$?
Minor:
- Line 202-203: Something is missing at the end of the sentence.
- Line 210: Probably $\eta \in \mathcal{T}_w\mathcal{M}$ .
- Eq 15: probably the $\nabla$ is missing.
- Line 164: Capital S at the beginning of the sentence.
- Line 122: Is the function $\phi$ an isometry?
- Line 101: Only isometric vector transport implies that the angle between tangent vectors may change? Any influence on the optimization?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discuss some limitations of their work, such as the computational cost, while the standard limitation that comes with such geometric approaches is the access to the associated manifold operations e.g. retraction, vector transport, etc. As a theoretical work, it does not have any direct negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive feedback.
**[On Q1 and Q2]**
As the reviewer pointed out, in most practical cases, the manifolds under consideration are often embedded in Euclidean space or are subsets of $\mathbb{R}^d$ with an appropriate Riemannian metric. For instance, the $n$-dimensional hypersphere $\mathbb{S}^{n-1}$, one of the elementary manifolds, is an embedded submanifold of Euclidean space. Thus, the inner product defined on the tangent space $T_x \mathbb{S}^{n-1}$ for any point on the hypersphere $x\in \mathbb{S}^{n-1}$ is simply the Euclidean inner product (i.e., $\langle u, v\rangle_x = u^\mathsf{T} v$ for any $x$ and any tangent vectors $u,v\in T_x \mathbb{S}^{n-1}$). As the reviewer mentioned, the computation of gradients on the hypersphere involves calculating the Euclidean gradient first and then projecting it onto the tangent space. Furthermore, in deep learning on manifolds, prevalent manifold structures include the Poincaré ball and the Lorentz model on hyperbolic space, along with the Stiefel manifold that encourages orthogonality among parameters. All three of these manifolds are defined on subsets of $\mathbb{R}^d$ with suitable Riemannian metrics. In this case, the computation of the Riemannian gradient, as the reviewer described, involves preconditioning the Euclidean gradient with the inverse metric.
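The two cases described above can be sketched in a few lines of NumPy (an illustrative toy, not code from the paper; the function names and the choice of the Poincaré ball metric are our own):

```python
import numpy as np

# Case 1 (embedded submanifold, e.g. the sphere S^{n-1}):
# project the Euclidean gradient onto the tangent space at x.
def rgrad_sphere(x, egrad):
    return egrad - np.dot(x, egrad) * x

# Case 2 (subset of R^d with a Riemannian metric g(w), e.g. the Poincare ball):
# precondition the Euclidean gradient with the inverse metric.
def rgrad_poincare(w, egrad):
    lam = (1.0 - np.dot(w, w)) ** 2 / 4.0   # inverse of g(w) = 4/(1-||w||^2)^2 * I
    return lam * egrad

x = np.array([1.0, 0.0, 0.0])               # a point on S^2
g = rgrad_sphere(x, np.array([1.0, 2.0, 3.0]))
assert abs(np.dot(x, g)) < 1e-12            # result is tangent at x
```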
---
**[On Q3]**
Indeed, the Lorentz model is not a Riemannian manifold but rather a (semi-)Riemannian manifold. Nevertheless, the Lorentz model remains amenable to Riemannian optimization. Consequently, because there is no guarantee that the Euclidean gradient preconditioned with the inverse metric (i.e., the Riemannian gradient) resides on the tangent space, the projection operation onto the tangent space becomes necessary. Equation (8) in our paper illustrates this projection procedure.
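For concreteness, the procedure described here can be sketched on the hyperboloid (Lorentz) model as follows (an illustrative sketch of an Equation (8)-style projection, not the paper's code; variable names are ours):

```python
import numpy as np

def minkowski_inner(u, v):
    # Lorentzian (Minkowski) inner product: -u0*v0 + sum_i ui*vi
    return -u[0] * v[0] + np.dot(u[1:], v[1:])

def lorentz_riemannian_grad(x, egrad):
    """Riemannian gradient on the Lorentz model: precondition the Euclidean
    gradient with the inverse metric (flip the time component, since
    g = g^{-1} = diag(-1, 1, ..., 1)), then project onto the tangent space."""
    h = egrad.copy()
    h[0] = -h[0]
    return h + minkowski_inner(x, h) * x    # tangent projection at x

x = np.array([np.sqrt(2.0), 1.0, 0.0])      # on the hyperboloid: <x, x>_L = -1
g = lorentz_riemannian_grad(x, np.array([0.5, -0.3, 0.2]))
assert abs(minkowski_inner(x, g)) < 1e-12   # result lies in the tangent space
```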
---
**[On Q4]**
While the Stiefel manifold was not considered in our experiments in this paper, it is mainly employed to encourage ***parameter orthogonality***. Several studies have shown that parameter orthogonality influences model generalization and serves as a remedy for the vanishing/exploding gradient problem. We include some references below.
**Reference**
- [LJW+21] Orthogonal Deep Neural Networks, TPAMI 2021.
- [TK21] Orthogonalizing Convolutional Layers with the Cayley Transform, ICLR 2021 Spotlight.
- [FT21] Efficient Riemannian Optimization on the Stiefel Manifold via the Cayley Transform, ICLR 2020.
- [WCC20] Orthogonal Convolutional Neural Networks, CVPR 2020.
---
**[On Q5]**
Yes, you are right. The retraction already involves the model parameter $w$; hence, we should apply the chain rule with respect to $w$, which requires the higher-order Riemannian gradient that is infeasible in practice.
---
**[On Minor Points]**
Thanks for pointing out the typos in the paper; we will correct them. In line 122, $\varphi$ is known to be isometric. In the other context, an isometric vector transport preserves angles between tangent vectors as well as vector lengths. These attributes facilitate the theoretical analysis of Riemannian optimization, and the isometricity of vector transports stands as one of the standard assumptions in Riemannian optimization analysis.
---
Rebuttal Comment 1.1:
Title: Post-rebuttal
Comment: I would like to thank the authors for their replies. After considering the other reviews, I tend to agree that the current paper is simply the Riemannian version of SAM, apart from the technical differences. However, I think that this is a fair contribution and for this reason, I will keep my score. I suggest the authors take the reviews into account and update the paper accordingly. | Summary: This paper introduces the SAM (sharpness-aware minimization) objective function for optimization problems on Riemannian manifolds. The motivation is that, given the success of SAM on Euclidean space, a Riemannian metric, which is typically different from the flat Euclidean metric, can introduce more domain prior knowledge into SAM. Example applications include social network analysis and knowledge graph completion. The paper derives a new objective function based on retraction operations on manifolds and provides a convergence analysis of the Riemannian SGD algorithm. Empirical results show slight improvement over baselines that are not based on the SAM objective function.
Strengths: The overall motivations make sense and the paper has made the contribution of combining SAM and optimization on Riemannian manifolds.
Weaknesses: The main weaknesses are as follow:
1) The derivations of the new objective relies on standard tools from optimization on manifold and there is limited technical contribution.
2) Though SAM was published and proved useful, it is still not clear why minimizing a normalized norm of Riemannian gradient helps improve the test accuracy (line 190). The statement on lines 59-60 is not well supported. It is understandable that the main focus of this submission is not on renovating SAM, but some informal explanation can be helpful.
3) Only the best test accuracies are reported. The authors are encouraged to conduct cross-validation and/or report average performance based on randomized experiments.
4) In my opinion, the improvement of the test accuracies is visible but not quite significant.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: line 101: should $\langle\xi,\eta\rangle$ be evaluated on the tangent space at $z$ rather than $w$?
line 112: the angle is not formally defined.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive feedback.
**[On Technical Limitations]**
We appreciate your valuable feedback, but we do not agree with the reviewer's opinion. Due to the space constraints of the rebuttal, we answer this concern in **1. Riemannian SAM is a non-trivial extension of Euclidean SAM with novel theoretical insights** in the general response.
---
**[On Intuition of Riemannian SAM]**
The "flat minima hypothesis" in Euclidean space was put forth, suggesting that neural networks achieve better generalization when they are trained to converge to flatter regions within the loss landscape. In this sense, the Sharpness-Aware Minimization (SAM), which encourages flat minima by (informally) minimizing the Euclidean gradient norm, has been demonstrated to effectively enhance model generalization in large-scale deep learning scenarios such as Vision Transformer and MLP-Mixer.
Building on this, due to the fact that an $n$-dimensional manifold locally bears a resemblance to an $n$-dimensional Euclidean space, we expect that the flat minima hypothesis proposed in Euclidean space would hold to some extent for objective functions defined on general manifolds. The Riemannian SAM algorithm can be informally understood as reducing the Riemannian gradient norm $\lVert \mathrm{grad} f(w)\rVert_w$, which reflects the local curvature and geometry of the manifold (hence, the underlying structure of the data) at the point $w$.
In order to validate our intuition, we address the challenge of optimizing a neural-net-like objective function defined on the unit sphere manifold $\mathbb{S}^{2}$.
The synthetic dataset is generated by drawing a total of $500$ samples from a standard Gaussian distribution $\mathcal{N}(0, 1^2)$ for $X$ and a uniform distribution $\mathcal{U}(0,1)$ for $y$, resulting in $X\in\mathbb{R}^{500 \times 3}$ and $y\in\mathbb{R}^{500}$. We choose a non-linear regression MSE loss, specifically $f(w)=\frac{1}{2n}\Vert y - \mathrm{ReLU}(Xw) \Vert_2^2$. In order to craft an objective function on the unit sphere, we impose the constraint $\mathcal{C} = \lbrace w\in\mathbb{R}^3: \lVert w \rVert_2=1 \rbrace$ on the model parameter $w \in \mathbb{R}^3$.
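This synthetic setup can be reproduced in a few lines (a sketch under our own assumptions about the random seed; the paper's exact data will differ):

```python
import numpy as np

rng = np.random.default_rng(0)       # seed is our own choice for reproducibility
n = 500
X = rng.standard_normal((n, 3))      # X ~ N(0, 1), shape (500, 3)
y = rng.uniform(0.0, 1.0, size=n)    # y ~ U(0, 1)

def loss(w):
    # f(w) = (1/2n) * || y - ReLU(X w) ||_2^2, with w constrained to ||w||_2 = 1
    r = y - np.maximum(X @ w, 0.0)
    return 0.5 * np.mean(r ** 2)

w = np.array([0.0, 0.0, 1.0])        # any unit-norm vector is feasible
print(loss(w))
```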
Figure 1-(a) in the attached PDF file converts Cartesian coordinates $w=(x,y,z)$ to spherical coordinates $(r,\theta,\varphi)=(1,\theta,\varphi)$, rendering contour plots. In Figure 1-(a), we showcase the converged points on the objective function under optimization with Riemannian SAM (in purple) and conventional Euclidean SAM (in pink).
Within a maximum iteration budget of $100$, we search for the best hyperparameters for each optimization algorithm. In Figure 1-(a), the purple point (Riemannian SAM) attains a loss value of $0.3800$ while the pink point (conventional Euclidean SAM) converges with a slightly higher loss value of $0.3808$.
Furthermore, in terms of sharpness measures, we considered the following two basic quantities: (i) the trace of the Hessian (sharpness in the context of Euclidean space), and (ii) Manifold-Aware Sharpness, characterized by the Riemannian gradient norm $\lVert \mathrm{grad} f(w)\rVert_w$. Notably, Manifold-Aware Sharpness aligns with Information-Geometric Sharpness [JLP+22] when dealing with statistical manifolds, where the Riemannian metric is defined by the Fisher information. For the aforementioned problem, we compare the two metrics, and Figure 1-(b,c) depicts the results. In both metrics, Riemannian SAM achieves smaller sharpness values than Euclidean SAM, implying convergence toward flatter regions.
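The two sharpness measures can be illustrated with a small numerical sketch (our own toy objective on the sphere, not the experiment from the rebuttal):

```python
import numpy as np

def hessian_trace(f, w, eps=1e-4):
    """Trace of the Hessian of f at w via central finite differences
    (Euclidean sharpness, measure (i))."""
    tr = 0.0
    for i in range(len(w)):
        e = np.zeros_like(w); e[i] = eps
        tr += (f(w + e) - 2.0 * f(w) + f(w - e)) / eps ** 2
    return tr

def manifold_sharpness_sphere(egrad, w):
    """Riemannian gradient norm on the unit sphere: project the Euclidean
    gradient onto T_w S^{n-1} and take its norm (measure (ii))."""
    g = egrad - np.dot(w, egrad) * w
    return np.linalg.norm(g)

f = lambda w: float(np.sum(w ** 2))        # toy objective for illustration
w = np.array([0.6, 0.8, 0.0])              # a unit-norm point
print(hessian_trace(f, w))                 # approximately 6 (Hessian is 2I in R^3)
print(manifold_sharpness_sphere(2 * w, w)) # ~0: gradient is normal to the sphere
```

Note how the radial component of the Euclidean gradient contributes to Euclidean sharpness but vanishes under the manifold-aware measure, which only sees directions tangent to the constraint surface.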
In other words, since the Euclidean SAM might fail to properly consider the underlying structure of the manifold even for toy examples, this phenomenon is expected to be exacerbated in extremely high-dimensional problems such as deep learning.
**Reference**
- [JLP+22] A Reparametrization-Invariant Sharpness Measure Based on Information Geometry, NeurIPS 2022.
---
**[On Experiments]**
Contrary to the reviewer's impression, we believe that our improvement is significant compared to the baselines. For a detailed response, due to the space constraints of the rebuttal, we answer the concerns in “**2. On Experiments**” in the general response and include the tables for the additional experiments in the attached PDF file.
Strengths:
- novel formulation allowing to extend SAM to Riemannian manifolds (Algorithm 1)
- theoretically verified model (Theorem 1)
- various experiments
Weaknesses: - contributions of this paper are unfortunately limited to a few examples
- the proposed algorithm mimics the original SAM after making necessary changes
- Algorithm 1 is not well explained, though it's the main contribution of the paper
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors:
An interesting result on optimization on manifolds; but I'm concerned about the sufficiency of the contributions of the paper. Besides that, the method is not well explained and the paper could've been better written.
Questions/Comments:
- Line 117: "Lorentz model is a Riemannian manifold" This is wrong since the Lorentz manifold is a pseudo-Riemannian manifold, as the metric is not positive-definite. I'm wondering how this can change the algorithm's proof and applicability to the Lorentz manifold.
- Line 145-147: This is only true when there is a nice extension of the function on the Euclidean space, and so might not be always feasible.
- Algorithm 1: the role of the base optimizer is not well explained.
- Line 8: what is the role of the transportation? Why can't one update the parameters after the ascent step (then one does not need transportation?)? This part is also not well explained
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive feedback.
**[On Contributions]**
Thank you for your important feedback, but we believe that our main example, hyperbolic representation learning, is not simply “one example” but has sufficiently general use cases, and that proposing an optimization technique that can advance it is a significant contribution to the community.
Deep learning in conventional Euclidean space often falls short in effectively capturing hierarchical relationships, efficiently representing tree-like structures, addressing dimensionality challenges (e.g., embeddings), etc. Under such limitations, one of the main approaches in geometric deep learning, ***hyperbolic representation learning***, has gained significant importance in various fields due to its ability to capture such relations in various data, for example, images, languages, and graphs. Hyperbolic representation learning has been able to address important challenges in Euclidean space more intelligently by taking into account the underlying geometric characteristics of the data, thereby enabling significantly advanced techniques. We introduce several previous studies.
While it holds true that the majority of modern deep learning architectures and techniques continue to be proposed in Euclidean space, the extension of successful models and foundational approaches from Euclidean space to (Riemannian) manifolds, irrespective of the domain, is gaining attention. Moreover, (Riemannian) manifold extensions that consider the underlying data structure often outperform their Euclidean counterparts. For instance, in computer vision, methodologies extended to hyperbolic spaces [VLB+19, KMU+20, GMK23, SKW+23, SBM23] have been actively studied for important problems including image embedding, classification, and segmentation.
Also, the well-regarded generative models in Euclidean space such as normalizing flow and diffusion models have successfully been extended to Riemannian Manifolds [LFC21, MN20, HAB+22]. Notably, even in language modeling, Transformer which is a main backbone, has been extended with fully hyperbolic modules [CHL+22], and hyperbolic Transformer outperforms the Euclidean counterpart. Moreover, in this year, the pioneering research [CCB+23] that incorporated hyperbolic geometry into deep reinforcement learning has been published in ICLR as a spotlight. More recently, avenues for enhancing numerical stability in hyperbolic representation learning have been published [MWW+23] in ICML 2023.
In this sense, we do not perceive our contribution to be limited solely to a few examples. Rather, we believe that our optimization approach has a great potential for broader applicability across various domains on manifold-aware deep learning.
**References**
- [VLB+19] Manifold Mixup: Better representations by Interpolating Hidden States, ICML 2019
- [KMU+20] Hyperbolic Image Embeddings, CVPR 2020
- [ELM+19] Continuous hierarchical representations with poincare ́ variational auto-encoders, NeurIPS 2019
- [LFC21] Hyperbolic Generative Adversarial Networks, IEEE Access 2021
- [MN20] Riemannian Continuous Normalizing Flows, NeurIPS 2020
- [HAB+22] Riemannian Diffusion Models, NeurIPS 2022
- [GSA+22] Hyperbolic Image Segmentation, CVPR 2022
- [CHL+22] Fully Hyperbolic Neural Networks, ACL 2022
- [GMK23] Hyperbolic Contrastive Learning for Visual Representations beyond Objects, CVPR 2023
- [SKW+23] Robust Hierarchical Symbolic Explanations in Hyperbolic Space for Image Classification, CVPR 2023
- [CCB+23] Hyperbolic Deep Reinforcement Learning, ICLR 2023
- [MWW+23] The Numerical Stability of Hyperbolic Representation Learning, ICML 2023
- [SBM23] Poincare ResNet, ArXiv 2023
---
**[On Riemannian SAM formulation]**
Due to the space constraints of the rebuttal, we answer the reviewer's concern in **1. Riemannian SAM is a non-trivial extension of Euclidean SAM with novel theoretical insights** in the general response.
---
**[On Algorithm 1]**
In Algorithm 1, the term “base optimizer $\mathcal{A}$” refers to any optimizer that can be employed on a manifold, such as Riemannian SGD or Riemannian Adam. Regarding line 9 in Algorithm 1, $\Delta_t^{adv}$ signifies the final update vector constructed using $\mathcal{A}$ with the Riemannian SAM gradient. In other words, the Riemannian SAM gradient $g_t^{adv}$ (line 8 in Algorithm 1) can be utilized to construct the first-order/second-order momentum of Riemannian Adam, or alternatively, it can be directly employed for performing Riemannian SGD. We will clarify this.
---
**[On Lorentz Model]**
This is a mistake in our statement, and we apologize for the confusion. As pointed out by the reviewer, while it is true that the Lorentz model involves a (semi)-Riemannian manifold, we can still employ the Riemannian optimization introduced in our paper to optimize the objective function defined on this manifold. In light of this context, we will provide a clearer description in the revision.
---
**[On Role of Vector Transport in Line 8]**
In Algorithm 1 and according to our Riemannian SAM formulation, the Riemannian gradient computed at $w=w_t^{adv}$ is not a vector on the tangent space $T_{w_t} \mathcal{M}$ defined at the point $w=w_t$, making the algebraic operations impossible for directly updating $w_t$. Consequently, in order to update the original parameter $w_t$ using the Riemannian gradient computed at the perturbed point, an operation is required to bring it into the tangent space at $w_t$, which can be achieved via the vector transport $\mathcal{T}_{w_t^{adv}}^{w_t}$ at line 8 in Algorithm 1.
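To make the role of the vector transport concrete, here is a minimal sketch of one Riemannian SAM iteration on the unit sphere, using the projection retraction and the projection-based vector transport, with plain Riemannian gradient descent as the base optimizer (an illustrative toy with our own step sizes and un-normalized perturbation, not the paper's implementation):

```python
import numpy as np

def proj(x, v):                       # tangent-space projection on the sphere
    return v - np.dot(x, v) * x

def retract(x, v):                    # projection retraction: R_x(v) = (x+v)/||x+v||
    y = x + v
    return y / np.linalg.norm(y)

def riemannian_sam_step(w, egrad_fn, rho=0.05, lr=0.1):
    g = proj(w, egrad_fn(w))                 # Riemannian gradient at w_t
    w_adv = retract(w, rho * g)              # ascent step to the perturbed point
    g_adv = proj(w_adv, egrad_fn(w_adv))     # Riemannian gradient at w_t^adv
    g_sam = proj(w, g_adv)                   # vector transport back into T_{w_t}M
    return retract(w, -lr * g_sam)           # descent update on the manifold

# Minimise f(w) = c^T w on the sphere; the minimiser is -c/||c||.
c = np.array([3.0, 0.0, 4.0])
w = np.array([1.0, 0.0, 0.0])
for _ in range(200):
    w = riemannian_sam_step(w, lambda _w: c)
print(w)   # converges toward -c/||c|| = [-0.6, 0, -0.8]
```

Without the transport step (`g_sam = proj(w, g_adv)`), `g_adv` lives in the tangent space at the perturbed point and could not be combined with vectors at `w_t`, which is exactly the issue the rebuttal describes.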
---
Rebuttal Comment 1.1:
Title: Response
Comment: I acknowledge the response provided by the authors. I still suggest revising the paper to clarify the contributions. Please also apply my comment on the Lorentz model to the revised version. Given the authors' efforts and after reading the responses and the general comment, I decided to slightly increase my score (from 3 to 4). Thanks!
---
Reply to Comment 1.1.1:
Title: Thank you for your comments and increasing the score. We are wondering if there are any further suggestions.
Comment: We sincerely appreciate your comments as they have strengthened our paper and also thank the reviewer for increasing the score.
In the revised version, we assure that we will further clarify the contributions of our paper and incorporate your comments regarding the Lorentz model.
As the reviewer mentioned that our paper still needs to be revised, we are wondering if there might be any further suggestions you could provide on potential avenues for improving our paper. | Rebuttal 1:
Rebuttal: Due to the space constraints on each rebuttal, we answer some important questions in the general response here.
**1. Riemannian SAM is a non-trivial extension of Euclidean SAM with novel theoretical insights**
* **[On technical side]**
We would like to emphasize that our Riemannian SAM does NOT merely mimic the original SAM, either theoretically or methodologically. Regarding methodology, there could be several extensions of conventional Euclidean SAM to a Riemannian optimization in different manners.
For example, it is most natural to choose a perturbation region at the current point $w_t$ as in the conventional Euclidean SAM, $\delta \in B_\rho(w_t) = \lbrace x \in \mathcal{M}: d_\mathcal{M}(w_t,x) \leq \rho \rbrace$, where $d_\mathcal{M}$ represents the distance on the manifold. However, adopting the constraint on $\delta$ in this manner may pose challenges in utilizing the standard assumptions for analyzing non-convex Riemannian optimization, such as geodesic or retraction smoothness (see condition (C-4) in our paper), which makes it difficult to guarantee convergence. Moreover, the computation of $d_\mathcal{M}$ is often computationally inefficient in practice. Another possible extension is to apply the vector transport operation from Equation 8 of Algorithm 1 to Equation 9. The following outlines the modified procedure: (i) $g_t^{adv} = \mathcal{A}(\mathrm{grad} \mathcal{L}(w; \mathcal{S})\lvert_{w=w_t^{adv}})$ and (ii) $\Delta_t^{adv} = \mathcal{T}_{w_t^{adv}}^{w_t} g_t^{adv}$.
For the base optimizer $\mathcal{A}$, any optimization algorithm commonly used in Riemannian optimization can be adopted (e.g., Riemannian SGD, Riemannian momentum, etc.). However, when vector transport is applied after constructing $g_t^{adv}$ via the momentum-based optimizer $\mathcal{A}$, the momentum construction takes place on the tangent space $T_{w_t^{adv}} \mathcal{M}$ at the perturbed point $w_t^{adv}$, while the parameter update occurs on the different tangent space $T_{w_t} \mathcal{M}$ at the point $w_t$. As a result, this might introduce additional challenges in understanding and analyzing the overall optimization process.
- In this perspective, various alternative extensions are also possible, but among them, we have carefully designed a ***theoretically valid, computationally practical, and non-trivially extended Sharpness-Aware Minimization to a manifold for Riemannian optimization.*** Then, we have successfully demonstrated both convergence analysis and empirical studies to corroborate our Riemannian SAM.
* **[On theoretical side]**
In terms of theory, our key observation of Theorem 1 in the paper lies in ***the alignment between the Riemannian SAM gradient $\mathcal{T}_{w_t^{adv}}^{w_t} \mathrm{grad} f(w_t^{adv})$ and the Riemannian gradient $\mathrm{grad} f(w_t)$*** for the perturbation point, $w_t^{adv} = R_{w_t}(\rho_t \mathrm{grad}f(w_t))$. The previous study [50] on Euclidean SAM shows that the Euclidean SAM gradient should be well-aligned with the true gradient step for convergence. Unlike the theoretical claim in [50], we stress that for a convergence guarantee those gradients should be well-aligned within the preconditioned space (by the inverse Riemannian metric), regardless of alignment in Euclidean space.
To verify this insight, we directly measure the angles between two vectors with a 2D toy example, illustrating how they align in practice. Toward this, we consider two angles: (i) $\angle(\nabla f(w_t^{adv}), \nabla f(w_t))$ (Euclidean Alignment) and (ii) $\angle (\mathcal{T}_{w_t^{adv}}^{w_t} \mathrm{grad} f(w_t^{adv}), \mathrm{grad} f(w_t))$ (Riemannian Alignment, Ours)
In this example, we consider the logistic regression where $200$ data samples are generated with $100$ of them sampled from $\mathcal{N}(-1, 1^2)$ and the remaining $100$ sampled from $\mathcal{N}(1, 1^2)$. The labels are assigned such that if a sample was drawn from a Gaussian distribution with a mean of $-1$, the label was set to $y=0$, and otherwise, we set $y=1$. We minimize the cross-entropy loss with our Riemannian SAM with the Fisher information matrix as the Riemannian metric.
Figure 2 in the attached PDF file depicts the comparison of angles. The loss decreases up to the 10-th iteration, after which it remains around the converged point. As evident from the illustration, while the angles between the Euclidean SAM gradient and the Euclidean gradient deviate by up to around 25 degrees, the angles between the preconditioned SAM gradient and the preconditioned gradient, influenced by the Fisher information, align more closely, with deviations only up to a maximum of 10 degrees. In high dimensions, we expect that the angles would become significantly larger, corroborating our theoretical insight.
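The angle comparison can be mimicked with a small helper that measures angles under an arbitrary metric (a generic sketch with our own example vectors; the Fisher information matrix from the rebuttal's experiment would take the place of $G$):

```python
import numpy as np

def angle_deg(u, v, G=None):
    """Angle between u and v; with a metric G it is measured in the inner
    product <u, v>_G = u^T G v (G = I recovers the Euclidean angle)."""
    if G is None:
        G = np.eye(len(u))
    c = (u @ G @ v) / np.sqrt((u @ G @ u) * (v @ G @ v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])
print(angle_deg(u, v))                          # 45 degrees in the Euclidean metric
print(angle_deg(u, v, G=np.diag([1.0, 9.0])))   # larger under an anisotropic metric
```

The same pair of vectors can thus look well-aligned under one metric and poorly aligned under another, which is the point of comparing Euclidean versus Riemannian (preconditioned) alignment.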
**2. On Experiments**
We did not select the best model based on the test data; instead, we opt for the best model based on the validation loss. Some baselines have reported their best test performance, which might have led reviewers to misunderstand our results. In fact, we report the average performance for knowledge graph completion and choose hyperparameters based on validation. Thus, our evaluation is comparatively more stringent than those of previous studies, placing us at a potential disadvantage in terms of evaluation. Also, following the reviewer's suggestion, we conduct additional experiments by varying seeds. Please refer to Table 1 in the attached PDF file.
During the rebuttal period, we conduct additional experiments with an expanded Transformer on machine translation tasks. Given the scale of these experiments, it was hard to run randomized simulations for the baselines as well, due to rebuttal time constraints. As a result, we report test BLEU scores by selecting the model *based on validation loss*. Please refer to Table 2 in the attached PDF file.
According to Table 1 and 2 in the attached PDF file, we emphasize that our results are NOT marginal for all experiments considered in our paper.
Pdf: /pdf/b037dc1f66372ef2269d1ea80e6319363d596d58.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
The Simplicity Bias in Multi-Task RNNs: Shared Attractors, Reuse of Dynamics, and Geometric Representation | Accept (poster) | Summary: This paper studies how recurrent neural networks form shared attractors and reuse dynamics in the multitask setting. A simplicity bias is revealed, i.e., RNNs will not create new attractors unless necessary. The authors further investigate how task similarities (symmetry, gradients) can be translated to representation similarities. Finally, they discuss how their results are related to continual learning and modularity.
Strengths: This paper poses an interesting scientific question on the formation of attractors in recurrent neural networks in the multi-task setting. The simplicity bias, to the best of our knowledge, is novel and investigated in detail. The experiments are nicely controlled, and the evidence is clear and convincing. The paper is in general well-written. Figures 1 & 2 are a pleasure to read (although Figures 3-5 need improvement, see weaknesses).
Weaknesses: The scope of the paper is a bit limited: it only did experiments on toy synthetic datasets. Although this is probably fine for a scientific paper when one tries to understand something deeply, I feel that the science part hasn't been pushed to the limit:
(1) How do you expect your conclusion scale to more complicated datasets (e.g., contains 1000 tasks)?
(2) Can your analysis generalize to auto-regressive models (e.g., transformer-like language models)?
I feel that the impact of this paper on engineering needs more discussion.
(1) Is the simplicity bias a good or bad thing in practice?
(2) Once we understand the simplicity bias, what can we do to improve current RNNs?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Line 175, W_rec -> W_{rec}
* Figure 3, lack of axes make the plots hard to read. Figure 3D, I don't understand the difference between the 2nd and the 3rd subplot.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Our work stems from neuroscience questions, but we thank the reviewer for prompting us to speculate on possible engineering applications.
Q1: How do you expect your conclusions to scale to more complicated datasets (e.g., contains 1000 tasks)?
The concept of convergence to shared dynamical objects, and that new ones are formed only if the error does not decrease should apply there as well. We expect a more complex structure to emerge, perhaps hierarchical, governed by dynamical requirements.
Q2: Can your analysis generalize to auto-regressive models (e.g., transformer-like language models)?
The notion of dynamical objects is, to the best of our knowledge, not usually employed in transformer models. While intriguing, we feel that we cannot make educated speculations in this domain at this stage.
Q3: Is the simplicity bias a good or bad thing in practice?
The simplicity bias in neural networks is probably a double-edged sword. On one hand, it allows models to efficiently generalize across tasks that share underlying structures, reducing the computational overhead and potentially speeding up training. This can be especially beneficial when data is limited, or tasks are closely related. On the other hand, an overemphasis on simplicity could lead to suboptimal performance on distinct or more complex tasks, as the model might overly regularize or not capture unique nuances of specific tasks. Additionally, it could be less optimal to remove all information about the task in more realistic and complex settings, where the task requirements themselves change over time.
One of our future directions is finding a set of tasks that afford two qualitatively different solutions, in order to compare their relative strengths and weaknesses in terms of generalization, performance, information-efficiency, robustness to noise, and flexibility.
Q4: Once we understand the simplicity bias, what can we do to improve current RNNs?
This depends on how we want them to generalize. Choosing the order in which tasks are trained can govern which dynamical objects are formed first, and serve as a scaffold for the following tasks. This, in turn, can shape representations, and the way in which networks will extrapolate.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply! I'll keep my original assessment. | Summary: While the relationship between task requirements and neural dynamics in Recurrent Neural Networks (RNNs) has been studied for individual tasks, the dynamics of multiple tasks working together remain largely unexplored. The study introduces a systematic framework to examine multiple tasks in RNNs, minimizing interference from input and output correlations. The findings reveal that RNNs tend to share attractors and reuse dynamics, which is referred to as the "simplicity bias." It is observed that RNNs develop attractors sequentially during training, prioritizing the reuse of existing dynamics and opting for simple solutions whenever possible. Concrete examples demonstrate that new attractors primarily emerge due to task demands or architectural constraints, representing a balance between the simplicity bias and external factors. The geometry of joint representations along attractors is examined, and it is shown that task representations align based on their input strength, resulting in correlated projections. The research suggests potential applications, such as using the geometry of shared attractors to infer unknown tasks. Moreover, the simplicity bias implies that network modularity may not emerge spontaneously in RNNs without specific incentives, providing insights into the conditions necessary for network specialization.
Strengths:
The paper provides the logical progression in the field from studying single tasks to exploring multiple tasks. This shift is considered more in line with the environmental conditions animals face, which often exhibit symmetries, regularities, and structures.
The study standardized the input and output structure across all tasks and used a consistent network architecture. This helped attribute differences in representations to intrinsic task complexities rather than variations in structure or design. Individual input and output channels were assigned for each task to prevent artificial convergence in representations. This ensured that any overlap or similarity in representations reflected shared computational requirements or reused dynamics across tasks.
The simplicity bias result is very interesting, and I would like to encourage the authors to further develop and refine this intriguing finding.
The manuscript is overall clear and the relevant code was shared with the submission, permitting transparency of the results.
Weaknesses: These suggestions are intended to improve the overall clarity and accessibility of the research.
The paper lacks a clear outline of its limitations, contributions, and literature reviews, and would benefit from providing more references to position the work within the context of prior research. It is also important to include information about the computational cost associated with the proposed approach.
The limitations outlined below highlight some of the weaknesses of the paper.
Regarding specific details:
* The legend for Figure 3B is incomplete and does not provide essential information. Additionally, the functioning of the axes in plot 3B is unclear.
* Figure 3C is not referenced or mentioned in the text, creating a discrepancy.
* The paper does not make any reference to the supplementary material, which should be included.
* Figure 5A is not properly referenced within the text, which may lead to confusion.
* The definition of 5D is missing, and its context should be clarified.
* Some typos and unclear sentences have been identified, which could benefit from clarification and proofreading.
* The readability of Figure 3D is poor, and it would be advantageous to increase its size to improve understanding.
To enhance the quality and readability of the paper, it is crucial to address these issues and make the necessary revisions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Dear authors, I would greatly appreciate it if you could respond to the following questions, as they would help improve my understanding of your work:
* Why are the representations learned in the parallel setting not displayed in the first figure?
* I am confused about how the two tasks are shown in the first setting, particularly with regards to "\0". My understanding was that in this setting, the network was trained on only one task. Could you please clarify?
* Line 277 suggests that the observed differences can be largely attributed to these architectures. I am unsure about the meaning of this statement. Could you elaborate further?
* Based on the findings, what conclusions can be drawn about neuroscience? How does this work contribute to our understanding of neural processes?
* Could you provide information about the color of the triangle in the figures?
Additionally, I find the results regarding the simplicity bias very interesting. I am curious to know if there could be a connection between this bias and the observed low rank bias in feedforward networks, where eigenvalues are learned sequentially when initialised with small random weights.
Lastly, I would like to know if you investigated the impact of different initializations in your experiments.
Addressing these concerns would greatly influence my evaluation.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: To enhance the paper, it would be valuable to provide a clear outline of the limitations of the work. Some potential limitations to consider include:
* Lack of biological validation: While the paper proposes mechanisms and patterns observed in RNNs, it may lack direct biological validation. It would be beneficial to explore experimental evidence or connect the findings to known neurophysiological phenomena to strengthen the biological plausibility of the conclusions.
* Other network parameters could also be examined; I understand that for visualisation and summarisation purposes you look at a subset of the information available in this network.
* Scope of external factors: The paper acknowledges the influence of external factors on the emergence and dynamics of multiple tasks but may not extensively explore the full range of potential external factors. Further investigations into the interplay between these factors and the simplicity bias in RNNs could provide a more comprehensive understanding. For example how initialisation changes the observed dynamics.
Clearly outlining these limitations would contribute to the transparency and credibility of the paper, encouraging further research and discussion in the field.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate the reviewer's thorough examination of our work and the invaluable feedback provided. We acknowledge the areas of improvement, especially in references and technical accuracies, and commit to rectifying them diligently.
Computational cost:
The networks highlighted in this submission are compact (N=100 hidden units), complemented by a modest-sized training dataset (512 trials x 200 steps). As a result, provided the training converges, the computational overhead remains minimal. For instance, a representative Vanilla network might require approximately 180 seconds for training (though this duration can fluctuate based on factors such as learning rate, batch size, and the set loss threshold) and utilizes around 16.2 MB of memory. In contrast, a comparable GRU network typically completes its training in about 110 seconds, maintaining a similar memory footprint.
Parallel set-up missing in fig 2:
Overall, the more constrained the regime is, the harder it is to train. When all of the tasks are forced to be produced in parallel, and the task is not as “easy” as N-bit FF, Vanilla networks did not converge during training. They did converge in the orthogonal regime, which is more constrained than the don’t-care regime but less than the parallel one. This is why, to exemplify the hybrids of the dynamical objects, we focused on the first two regimes.
The training regime of O:
In this regime the network is trained on all of the tasks. The difference is that when the network is tested on task 1, it is not “penalized” by the loss function for whatever it produced on channel 2. So each trial corresponds to only one task, but the training set consists of all of the tasks.
Relevance to Neuroscience:
One interesting idea that our work provides is that shared representation, or abstraction of the task in the neural dynamics, can be a result of laziness rather than sophistication. This is because in our experiments networks always developed a shared attractor before splitting it into multiple ones, and it could be that the nervous system uses the simplest representations when possible. Another relation is the order in which tasks are learned, which can affect the final representation achieved. The first task will determine the attractor that will serve as a scaffold for the following ones.
Line 277:
One major difference between our results and those obtained in previous studies is the input-output statistics and input architecture we used to train the network. We elaborate in the response to reviewer ZQ4N. Overall, since different papers have different approaches for how the information flows from input to output, such choices greatly affect the joint dynamics of the networks.
Colors of shapes in figure 4:
We edited the figure; it appears in the attached PDF.
Connection between this bias and the observed low rank bias in feedforward networks:
As mentioned in the discussion, there is a common mechanism in gradient descent dynamics - different structures evolve at different speeds. Because loss can be reduced in various ways, the fastest structure “wins” and the slower ones do not emerge. This is responsible for low-rank perturbations to connectivity in recurrent networks (Schuessler et al 2020), and is analogous to the neural race reduction (Saxe et al), and to our work.
Different Initializations:
Throughout our experiments, we trained networks using both the "Lazy" and "Rich" output regimes.
For the Lazy regime, we initialized weights within the range U([-1/sqrt(N), 1/sqrt(N)]).
For the Rich regime, weights were initialized within U([-1/N, 1/N]).
In the tasks we explored, no significant differences were observed between the two initialization methods. Included in the attached PDF is a figure, showcasing results from networks trained under the "Rich" regime as a point of comparison to the "Lazy" approach.
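As a concrete illustration of the two regimes, here is a minimal sketch based only on the ranges quoted above (the function name and the use of NumPy are our own assumptions, not the authors' code):

```python
import numpy as np

def init_recurrent_weights(n_hidden, regime="lazy", rng=None):
    """Draw an N x N recurrent weight matrix for the given regime.

    "lazy": entries ~ U([-1/sqrt(N), 1/sqrt(N)])
    "rich": entries ~ U([-1/N, 1/N])
    """
    rng = rng or np.random.default_rng(0)
    if regime == "lazy":
        scale = 1.0 / np.sqrt(n_hidden)
    elif regime == "rich":
        scale = 1.0 / n_hidden
    else:
        raise ValueError(f"unknown regime: {regime}")
    return rng.uniform(-scale, scale, size=(n_hidden, n_hidden))

W_lazy = init_recurrent_weights(100, "lazy")  # entries bounded by 0.1
W_rich = init_recurrent_weights(100, "rich")  # entries bounded by 0.01
```

The rich regime's smaller initial weights are the usual way to push networks away from the lazy (kernel-like) training regime.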
Definition and context of figure “5D”:
We appreciate your attention to detail. The reference was meant for Figure 5C-bottom. Our aim is to demonstrate that the geometry of tasks on the manifold is dictated by the relative 'spacing' or normalized derivative of the associated function. While earlier sections established that tasks with similar derivatives are proximate, this section elucidates why: tasks with analogous normalized derivatives align more closely in phase space due to consistent point-to-point proportions. Figure 5C-top visualizes this concept. In Figure 5C-bottom, we project each task to fit a regressor to the outputs of the other tasks. Tasks that are more similar in terms of their derivatives and spatial proximity exhibit similar structures, hence facilitating regression of their outputs.
---
Rebuttal Comment 1.1:
Comment: Many thanks to the authors for your careful explanation and detailed rebuttal. I increased my score as a reflection of improvement in clarity and presentation as well as the general understanding of the contribution. I really appreciate the neuroscience focus of this paper. In relation to neuroscience, it's worth noting that the inductive bias arising from the brain's connectomics/ network architecture could also potentially influence the attainable learnable attractors and representations. | Summary: This paper uses a multi-task RNN setup and tries to make sense of what computations get shared, and how they get shared.
Strengths: A novel task set-up is used with gated, orthogonal, and parallel settings. The overall problem trying to be tackled is interesting.
Weaknesses: 1) This paper is very confusingly presented. Section 2.2 is confusing. All the figures are extremely hard to parse and need much better annotation/labelling. I couldn’t make heads or tails of what Figure 3 was trying to say. Figure 4 suddenly has triangles, pentagons, and diamonds in different colours, with no explanation why.
2) A main claim is that an understanding of a ‘simplicity bias’ is provided, but it’s not clear what understanding is provided beyond Yang et al 2017, or Driscoll et al 2022. Overall, it’s not clear how this paper furthers our understanding beyond those papers.
3) Only one network structure was analysed, while Yang, Driscoll, and others, show interesting phenomena under different architectural constraints.
4) No reference is given to Saxe et al., 2022 which shows that shared representations are favoured as they are the ones which ‘see more data’ and so are learned fastest, thus explaining much of the phenomena from Yang, Driscoll, and your paper.
5) (Reference [5] and [6] are the same paper.)
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: See Weaknesses
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: Not clear that this clarifies anything over and above Yang or Driscoll. The paper is really hard to make sense of. Not because it is technically difficult, but because of its presentation. In particular, the figures are so under labelled that they may as well not be there.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: As written in the general response, we sincerely apologize for the lack of clarity. We hope the new figures 3 and 4 convey our first steps towards resolving this issue, and we will make this a high priority for the final manuscript.
Relation to Yang & Driscoll. We indeed drew inspiration from these studies, but our approach extends and diverges from them in several respects.
Input-output structure: The task set used by Yang et al (and Driscoll et al) was inspired by tasks used in neuroscience experiments. Both works cleverly utilized the resulting networks to ask questions about shared representations and dynamical objects - that partially overlap with our work.
The neuroscience tasks exhibit correlations between input-output statistics and cognitive demands. For instance:
Input statistics: Working-memory tasks typically involve two stimuli separated by a delay, while integration tasks use continuous stimuli.
Output statistics: Decision-making tasks generally involve a binary outcome, whereas the classic GO task necessitates input stimulus replication.
Input-output architecture:
Yang's study was potentially constrained by using a shared output channel. Our methodology avoided this pitfall by ensuring outputs were distinct across tasks, so that the representations are not forced to be aligned w.r.t. the output of the network. Otherwise, tasks are forced to be shared or separated depending on whether they require similar or different outputs. In section 4 of the supplementary material, we demonstrate a spectrum of input-output architectures and how each of them constrains the joint representation in various ways.
First-Principle Approach with Basic Dynamical Objects:
A foundational aspect of our work is the first-principle approach to understand the dynamics of RNNs. Our rationale was that in order to understand dynamical reuse of objects in complex tasks, we should first focus on primary dynamical objects such as fixed points, limit cycles, and line and plane attractors. It is unclear a priori whether two topologically distinct dynamical objects will merge in a multi-task set up. We showed that networks tend to merge these primary dynamical objects into singular representations, suggesting that this simplicity bias might be a general principle extending across various object types.
By supplying three different task regimes, where any solution of the last two is also a solution to the first, we show that tasks share objects when possible, but when constrained develop separate solutions. Hence, representation sharing is a bias but not definite, and we show that task regime is one way to force separation. Another way to sculpt the joint representation is via the input-output architecture, as discussed in Supp. 4.
Nuanced Insights into Learning Dynamics:
We show that the number of dynamical objects is correlated with the number of unstable eigenvalues of the recurrent matrix. We show how learning, in our regime, starts with no unstable modes and the origin as the only attractor. Both the number of dynamical objects and the number of unstable eigenvalues commonly increase with training, and networks start with a shared-dynamics solution before developing additional unstable directions.
So, we provide an argument for the preference for shared representation.
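A minimal sketch of the bookkeeping behind this argument (our own illustration, assuming a discrete-time linearization, so "unstable" means an eigenvalue of modulus greater than one; this is not the authors' code):

```python
import numpy as np

def count_unstable_modes(W):
    """Count eigenvalues of the recurrent matrix W with modulus > 1.

    Under a discrete-time linearization, these correspond to unstable
    directions of the dynamics; the rebuttal correlates their number
    with the number of dynamical objects.
    """
    return int(np.sum(np.abs(np.linalg.eigvals(W)) > 1.0))

# A zero matrix has no unstable modes; 1.5 * identity has all of them.
print(count_unstable_modes(np.zeros((5, 5))))   # 0
print(count_unstable_modes(1.5 * np.eye(4)))    # 4
```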
Dive into Representational Geometry:
We ask another set of questions about the geometry of the shared representations, and check which task similarity correlates with similarity in representation. To do so, we focus on a set of tasks that all require line attractor, and only differ in the mapping between input and output.
Reference to Saxe et al. 2022:
This reference is indeed pertinent, and it was an oversight on our part not to include it in line 281. Our findings align with the 'race' concept presented in that study, consistent with the other works referenced in that section. However, our experimental setup differs: we employ recurrent networks, leading to a race in the formation of dynamical objects, rather than connections between modules as in a feedforward network. Schuessler et al.'s research might be more analogous, given that their eigenvalue outliers emerge sequentially at varying rates. We do believe in the generality of the race analogy, and therefore cited works from several different domains in that paragraph.
---
Rebuttal Comment 1.1:
Title: Many thanks for your response
Comment: Many thanks for your response. I will raise my score based on your statement that you will greatly increase the clarity of the paper. I am still not convinced how much this paper offers over Yang/Driscoll, but I now see some places where it does. Many thanks. | Summary: The paper proposes "simplicity bias" in RNNs when learning multiple tasks simultaneously. In particular, the paper focuses on investigating the formation of attractors in the dynamical system of the RNN when tasks of varying difficulty are handled jointly. The RNN develops attractors sequentially: simple attractors emerge first, and new attractors may arise with task demands or architectural constraints. The authors conducted extensive, carefully-designed numerical experiments to demonstrate their ideas. The findings shed light on understanding the dynamics of artificial RNNs and also cortical functions in the brain.
[post-rebuttal]
I appreciate the authors' reply. I recommend an acceptance.
Strengths: - The experiments are well-designed
- Analysis is conducted from various perspectives
- The results are visualized and presented in an intriguing way
- To my knowledge, the insight about simplicity bias in RNNs is novel (there are a few works on other kinds of models, but not from a dynamical-systems perspective).
Weaknesses: Overall, I think the paper did a good job of demonstrating the core concept of simplicity bias in RNNs. If something is missing, I would say it is unclear to what extent the conclusions apply. The tasks being tested are simple enough to understand, which makes the findings more intuitive for readers. However, real-world tasks are usually much more complicated, and it is hard to define or compare the difficulties of tasks. One particular design choice about the model architecture is that the input and output channels for individual tasks are distinguished, which may not be the case for the brain or for many ANN models. In sum, there remains much to do to further validate and understand the conclusions made in this paper.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - The abstract reads a bit too long and redundant; it might be more concise to limit the abstract to 200 words.
- Reference 5 duplicates reference 6.
- Although I think it is reasonable to first consider the basic Elman-type RNN (eq. 1), I am also curious whether other kinds of RNN, such as LSTM and GRU, exhibit a similar simplicity bias, or whether something differs.
- Some figures (e.g., Fig. 2) are too compact, which makes them hard to read when printed on paper.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: This work is fundamental and rather a proof of concept. Future work should try to clarify how far the conclusions extend to broader model architectures and task properties.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive evaluation of our work.
Specific comments have been addressed in the general response to all reviewers, encompassing areas such as study limitations, GRU and LSTM discussions, and clarity aspects (e.g., Figure 2). We will refine the abstract to enhance its precision and conciseness. Thank you for the valuable suggestion.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply! I keep the original recommendation. | Rebuttal 1:
Rebuttal: We thank the reviewers for their constructive comments. Responses to common themes are here, and detailed individual responses are below.
Clarity: We sincerely apologize for the lack of clarity noted. We have already started clarifying the figures (examples in the 1-page PDF), and will continue doing so along with the text for the final version.
Technical Corrections:
Thanks for pointing out the technical issues in the text. We fixed them in the manuscript.
PDF:
We attach the newer versions of figures 3 and 4 from the paper, along with detailed captions.
Studying other architectures (GRU/LSTM):
Initially, we studied both GRUs (Gated Recurrent Units) and LSTMs (Long Short-Term Memory networks) in our experiments. These architectures, renowned for their enhanced memory capabilities, have shown rapid and efficient learning in our tasks. However, as we delved deeper into understanding the intrinsic properties of neural dynamics, certain features of these advanced architectures emerged as potential obstructions.
The strength of GRUs and LSTMs lies in their intricate internal gating mechanisms. In GRUs, the update gate is of particular interest. This gate allows the network to preserve its memory from previous time steps by choosing to propagate forward the input from past states, effectively resisting an immediate update based on new information. This "immunity" to immediate synaptic updates, although computationally powerful, deviates from our understanding of biological neural dynamics. During our experiments, we noticed that many networks leaned on a very small number of neurons as persistent task-bits, allowing them to maintain specific states indefinitely.
While this property is valuable for certain computational tasks, it poses questions about biological plausibility. Real biological neurons, as we understand them, do not exhibit such prolonged immunity to updates. Therefore, using architectures that permit such behaviors could lead us away from capturing foundational properties of biologically plausible dynamics.
In contrast, the Vanilla RNN, while more straightforward, offers a clearer lens through which we can study these foundational properties. It provides a more direct and interpretable mapping between input, internal dynamics, and output, making it better suited for our study's objectives.
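To make the gating contrast concrete, here is a minimal sketch (assumed simplified dynamics and our own function names, not the authors' code) of a vanilla RNN step next to a GRU-style convex mix; with the update gate z at zero, the GRU carries its previous state forward unchanged:

```python
import numpy as np

def vanilla_step(h, x, W, W_in):
    # Elman-style update: the full state is overwritten every step.
    return np.tanh(W @ h + W_in @ x)

def gru_like_step(h_prev, h_candidate, z):
    # Update gate z in [0, 1]: z == 0 copies the old state forward,
    # z == 1 fully adopts the candidate state. This is the "immunity"
    # to immediate updates discussed above.
    return (1.0 - z) * h_prev + z * h_candidate

h = np.array([0.3, -0.2])
candidate = np.array([1.0, 1.0])
frozen = gru_like_step(h, candidate, 0.0)   # identical to h
updated = gru_like_step(h, candidate, 1.0)  # identical to candidate
```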
For completeness, we attach a version of fig 3B that was generated with GRU networks, showing qualitatively similar behavior to vanilla networks.
Limitations of our study:
We appreciate the call to delineate the constraints of our research. Indeed, there are several factors that merit attention. We will also make sure these considerations are integrated into the discussion.
Biological Relevance: Even though our findings could hold implications for biological neural networks, drawing direct parallels remains a challenge. The architecture and function of biological networks are vastly more intricate, involving unclear connectivity, immensely differing scales of dimensionality, and overlapping neural processes, notably synaptic plasticity. Our study did not venture into multi-layer configurations, which might provide a more apt comparison to layered structures within the brain.
Network Parameters: The behavior of neural networks is inherently influenced by numerous free parameters. While we addressed several of these parameters, there remain others not explored in our study. For instance, while network size appeared to have a negligible impact on our results, the introduction of other activation functions or regularization techniques compromised the efficacy of training Vanilla networks.
Task Setup: Our approach to multi-task learning was crafted with systematic rigor, yet it represents just one among myriad potential configurations. Achieving a holistic understanding of multi-task representations may necessitate a combination of analytical methodologies, more exhaustive task setups, or a diversified array of such configurations.
Scalability: Our investigations were conducted on a specific scale with respect to the number of tasks and network size. It remains uncertain how our findings would extrapolate to more expansive datasets comprising hundreds or thousands of tasks. Additionally, while our networks effectively handled the tasks presented, they might exhibit different dynamical and representational behaviors when scaled to accommodate more complex task sets or larger architectures. The relationship between scale and the observed simplicity bias is an area ripe for further exploration.
Pdf: /pdf/0f1dafef76f5a87827453a620dd008ad2b859713.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
ViDA: Homeostatic Visual Domain Adapter for Continual Test Time Adaptation | Reject | Summary: The paper addresses the problem of continual test time adaptation by proposing to add two branches of low-rank and high-rank adapters.
The paper claims that the low-rank adapter learns the domain-agnostic knowledge, whereas the high-rank adapter captures the domain-specific knowledge.
The paper also proposes a Homeostatic Knowledge Allotment (HKA) strategy to discern the contribution of the two branches.
Experiments are conducted for the classification problem on the benchmarks ImagenetC, CIFAR10C, and CIFAR100C, and for the semantic segmentation on the Cityscapes-to-ACDC benchmark.
The experimental results demonstrate the effectiveness of the proposed approach as it achieves state-of-the-art results using some selected pre-trained deep neural network architecture.
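The HKA idea summarized above could be sketched roughly as follows. This is an entirely hypothetical illustration: the entropy-based uncertainty score, the threshold, and the exact update of the scale factors are our assumptions, not the paper's Eq. 4.

```python
import numpy as np

def predictive_entropy(probs):
    # A common uncertainty score: entropy of the softmax prediction.
    p = np.clip(probs, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def hka_scales(uncertainty, threshold=0.5):
    # Hypothetical HKA-style gating: for uncertain samples, boost the
    # high-rank (domain-specific) scale lambda_h; otherwise boost the
    # low-rank (domain-agnostic) scale lambda_l.
    if uncertainty > threshold:
        return 1.0 + uncertainty, 1.0 - uncertainty  # (lambda_h, lambda_l)
    return 1.0 - uncertainty, 1.0 + uncertainty

u_high = predictive_entropy(np.array([0.25, 0.25, 0.25, 0.25]))  # max entropy
u_low = predictive_entropy(np.array([0.97, 0.01, 0.01, 0.01]))   # confident
```

Reviewer question 1 below amounts to swapping the two branches of `hka_scales` and observing the effect.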
Strengths: 1. The paper tries to capture the domain-specific and domain-agnostic knowledge using two separate adapters: low-rank and high-rank adapters.
2. The HKA strategy to update the weight contribution of these adapters based on the uncertainty value of prediction is a nice idea.
3. Ablation study demonstrates the contribution of different components.
4. Experimental results for zero-shot generalization are new in the TTA setting.
Weaknesses: 1. No theoretical justification or even intuition is provided about why the low-rank adapter learns the domain-agnostic knowledge, whereas the high-rank adapter captures the domain-specific knowledge.
2. The backbone architecture is not consistent with previous works such as CoTTA. Section 3.4 in the supplementary material has only for CIFAR10C. What about the performance of the proposed approach for Imagenet-to-ImagenetC on ResNet50? What about the performance of the proposed approach for CIFAR100-to-CIFAR100C for ResNeXt-29 architecture?
3. The approach retrains the model, with low/high-rank adapters added, for some steps using the source data. This precludes using off-the-shelf pre-trained models without access to the source domain data.
Minor Comments
1. References should be conference/journal version, for example, 57 reference can cite the CVPR conference version
2. Line 175: Figure .2 --> Figure 2, and many more "Figure"
3. Line 237: Modal --> Model
4. Line 248: comparability. --> comparability
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Equation 4 depicts the formulation to modify the scale factors $\lambda_h$ and $\lambda_l$ based on uncertainty scores. What if the conditions to update the scale factors are reversed? In other words, when facing a sample with a high uncertainty value, decrease $\lambda_h$ (instead of increasing it), and vice versa for $\lambda_l$. It would be interesting to see what happens in such a scenario.
2. Why the low-rank adapter learns the domain-agnostic knowledge, whereas the high-rank adapter captures the domain-specific knowledge? Can the authors provide some intuition behind it, other than t-SNE plot of embeddings?
3. The backbone architecture for most experiments is not consistent with previous works such as CoTTA. Section 3.4 in the supplementary material has only for CIFAR10C. What about the performance of the proposed approach for Imagenet-to-ImagenetC on ResNet50 (missing in Table 1)? What about the performance of the proposed approach for CIFAR100-to-CIFAR100C for ResNeXt-29 architecture?
4. Why for the semantic segmentation task only the results for 3 rounds has been reported, as approaches such as CoTTA experiments for for 10 rounds?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: 1. The experimental results indicate the effectiveness of the proposed approach for transformer based architecture and not much for convolutional neural network based architecture.
2. Addition of extra modules, such as adapters, requires further training of the model with adapters for one epoch on the source domain data. This is a limitation since the source domain data may not be available in many practical scenarios, and only the pre-trained model may be provided.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - Q1 'More intuitions of the low-rank adapter and high-rank adapter': Thank you for the constructive advice, please refer to the global rebuttal Q1, including the justifications of H-divergence verification[18], Class Activation Mapping (CAM) visualization, and long-term CTTA experiment.
- Q2 'CNN backbone': Thank you for your comprehensive feedback. We will further extend our CTTA experiments using CNN backbones in the final version. The following Table 1 presents the results of our ImageNet-to-ImageNet-C CTTA experiment with ResNet50 as the backbone. Our approach substantially reduces the error rate by 20.8% compared with the source model and achieves competitive performance compared with other CTTA methods. Additionally, our CIFAR100-to-CIFAR100C CTTA experiment with the ResNeXt-29 backbone is detailed in the following Table 2. Our proposed method shows a promising result, further underscoring the effectiveness of our approach in addressing the CTTA challenge with a CNN backbone. Given the remarkable performance and generalization capability demonstrated by vision transformers in visual recognition tasks [14], we leverage transformer models as the backbone in classification and segmentation CTTA settings. Meanwhile, since parameter-efficient fine-tuning (PEFT) methods (e.g., adapters, prompts) are better suited to transformer architectures [10,20,29], we report transformer-based CTTA results in the main experiments.
| Method(ResNet50) | Source | BN adapt[31] | Tent[35] | CoTTA[36] | Ours |
| --- | --- | --- | --- | --- | --- |
| Average error rate | 82.0 | 68.6 | 62.6 | 62.7 | 61.2(+20.8%) |
| Method(ResNeXt-29) | Source | BN adapt[31] | Tent[35] | CoTTA[36] | Ours |
| --- | --- | --- | --- | --- | --- |
| Average error rate | 46.4 | 35.4 | 60.9 | 32.5 | 31.5(+14.9%) |
- Q3 'Pre-train on source data': In Lines 254-255, we pre-train the initial adapter in the source domain for several iterations before adapting to continual target domains, aiming to establish a stable initial parameter base for ViDAs. However, this pre-training step is optional and doesn't compromise the effectiveness of our approach. As illustrated in the following Table (Cityscapes-to-ACDC CTTA), ViDAs exhibit notable enhancements on the CTTA challenge even with random initial parameters or parameters pre-trained on ImageNet. To better showcase the practicality, we will add experiments with other parameter initialization techniques in the final version.
| | Adapter pretrain | Fog | Night | Rain | Snow | Mean (IoU) |
| --- | --- | --- | --- | --- | --- | --- |
| Source [58] | - | 69.1 | 40.3 | 59.7 | 57.8 | 56.7 |
| CoTTA [57] | - | 70.9 | 41.2 | 62.4 | 59.7 | 58.6 |
| Ours | Source | **71.6** | 43.2 | **66.0** | 63.4 | 61.1 |
| Ours | Random initial | **71.6** | 43.6 | 64.9 | 61.9 | 60.5 |
| Ours | ImageNet | **71.6** | **44.3** | **66.0** | **63.5** | **61.4** |
- Q4 'Inversed HKA': In response to the review comments, we conduct an additional experiment to invert the scale factors within the Homeostatic Knowledge Allotment (HKA) strategy. Specifically, for samples exhibiting high uncertainty, we reduce λh while increasing λl. As shown in the following Table, building upon the ablation study (Table 6) of the main paper, we integrate the **Inversed HKA** approach into Ex4, which already incorporates both low-rank and high-rank ViDAs. This adaptation yields an average error rate of 46.3 on ImageNet-C, marking a slight 0.7% error rate increase compared to Ex4. This experiment underscores how the proposed HKA strategy promotes the different domain representations between the low-rank and high-rank ViDA modules.
| | Contributions | Average error rate |
| --- | --- | --- |
| Ex4 | ViDAh + ViDAl | 45.6 |
| Ex5 | ViDAh + ViDAl + HKA | 43.4 |
| **Inversed HKA** | ViDAh + ViDAl + Inversed HKA | **46.3** |
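The allotment direction being ablated here can be sketched as follows; the threshold, base scale, and update rule below are illustrative placeholders, not the paper's Equation 4:

```python
# Hedged sketch of the HKA allotment direction: high per-sample uncertainty
# emphasizes the high-rank (domain-specific) branch; low uncertainty
# emphasizes the low-rank (domain-agnostic) branch. The inverted variant
# swaps the two, as in the rebuttal ablation. lambda0 and theta are
# illustrative placeholders, not the paper's values.
def hka_scales(uncertainty, lambda0=1.0, theta=0.5, inverted=False):
    """Return (lambda_h, lambda_l) for one sample."""
    if uncertainty > theta:
        lam_h, lam_l = lambda0 + uncertainty, lambda0 - uncertainty
    else:
        lam_h, lam_l = lambda0 - uncertainty, lambda0 + uncertainty
    return (lam_l, lam_h) if inverted else (lam_h, lam_l)

# Fused feature (conceptually): f = f_base + lambda_h * f_high + lambda_l * f_low
lam_h, lam_l = hka_scales(uncertainty=0.8)
assert lam_h > lam_l            # uncertain sample leans on high-rank knowledge
lam_h_inv, lam_l_inv = hka_scales(uncertainty=0.8, inverted=True)
assert lam_h_inv < lam_l_inv    # inverted HKA reverses the allotment
```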
- Q5 '10 rounds CTTA': We further present the segmentation CTTA experiment with 10 rounds in rebuttal PDF Table 2. Notably, it demonstrates a consistent enhancement in mIoU during the initial rounds (rounds 1-3) while maintaining stable performance in subsequent rounds. Meanwhile, we also provide the 10-round classification CTTA experiment in rebuttal PDF Figure 2. Our proposed method (red line) shows a consistently declining average error rate during the long-term adaptation process, demonstrating its effectiveness and stability in addressing continual domain shifts. Finally, we will address minor comments and incorporate the comprehensive 10-round experiments into the final version.
---
Rebuttal Comment 1.1:
Title: Most of the Questions are Addressed in the Rebuttal
Comment: I want to thank the authors for addressing my comments and questions.
Other reviewers had similar questions regarding the utility of low-rank/high-rank adapters, which the authors have addressed to some extent in the rebuttal.
The additional experimental results in the rebuttal show the proposed approach's effectiveness.
Inversed HKA results empirically support the intuition that the high-rank adapter focuses on domain-specific knowledge, and the low-rank adapter focuses on domain-agnostic knowledge.
However, the visualization of CAM, similar to Figure 3, for Inversed HKA would have further supported the claim.
Based on the response, I do not have any further queries.
Thank you.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 56Wj
Comment: Dear Reviewer 56Wj,
We greatly appreciate your valuable feedback and the acknowledgment of our intuition that the high-rank adapter emphasizes domain-specific knowledge, while the low-rank adapter focuses on domain-agnostic knowledge. Your crucial suggestion to introduce an inverted HKA experiment is immensely appreciated, and we are committed to integrating these additional rebuttal experiments into the final version.
Unfortunately, due to restrictions at the current stage, we are unable to include figures or anonymous links in our official comments. However, rest assured that in the final version, we promise to add the CAM result of the inverted HKA to further substantiate our contributions. Thank you so much for your time and insightful comments.
Best Regards | Summary: This paper proposes to utilize domain-specific and domain-agnostic knowledge to tackle the error accumulation and catastrophic forgetting problem and boost the performance of continually test-time adaptation task. The proposed visual domain adaptor (ViDA) aims to adapt current domain distribution and maintain the continual domain-shared knowledge in CTTA, while a homeostatic knowledge allotment strategy is designed to fuse the knowledge. The experiments demonstrate that the method achieved SOTA.
Strengths: 1. The logic of this paper is clear and the performance is good.
2. The whole method is easy to understand and implement.
3. They conduct sufficient experiments to support their proposal.
Weaknesses: 1. The authors mentioned that the proposed method ensures no extra parameter increase in Section 1 (L64) and Section 3 (L195). However, extra parameters and computational costs are considered as limitations in L335, which is contradictory and unclear. Adding clear explanations would be appreciated.
2. The authors argue that ViDA with a low-rank prototype focuses on domain-agnostic knowledge while ViDA with a high-rank prototype concentrates more on domain-specific knowledge. The explanation of Fig. 1 is not sound enough.
3. How are the different ViDAs projected into the pre-trained model by re-parametrizations? Please provide more details.
4. Other suggestions:
L261 Table .1 -> Table 1
L275 Table .2 -> Table 2
L296 Table. 3 -> Table 3
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations on extra computational costs are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - Q1 'Extra parameter': We appreciate your insightful feedback. In the final version, we will provide clearer explanations of parameter usage. Regarding Lines 64 and 195, we elaborate on the fact that ViDAs can be re-parameterized and projected into the original model due to their linear relationship, ensuring no additional parameters for the **network**. Different from the network parameters discussion, in Line 335, we aim to express that our entire **training framework** (teacher-student model) incurs extra computational costs during test-time parameter optimization.
- Q2 'More explanation of different domain representation for ViDAs': Thank you for the constructive advice, please refer to the global rebuttal Q1, including the justifications of the H-divergence verification [18], Class Activation Mapping (CAM) visualization, and long-term CTTA experiment. Besides, we design a verification experiment using adapters with the same structures, aiming to demonstrate the necessity of low-rank ViDA and high-rank ViDA. We have executed an ImageNet-to-ImageNet-C CTTA experiment using a combination of two high-rank adapters or two low-rank adapters, as shown in the rebuttal PDF Table 1. To ensure fairness, we conducted these experiments without implementing the homeostatic knowledge allotment (HKA) strategy. Notably, the two low-rank adapters (Ex2) demonstrated consistently lower long-term error rates compared to the source model and two high-rank adapters. The above results can be attributed to the fact that the two low-rank ViDAs tend to learn more general information and domain-invariant knowledge during continual adaptation. However, our method outperforms the two low-rank adapters across 14 out of 15 corruption types. This indicates that solely relying on low-rank adapters without the involvement of high-rank adapters is insufficient to fit target domains and match their data distribution. On the other hand, the performance of the two high-rank adapters initially surpasses ours (Ex4) in the early stages, covering the first few target domains. Nevertheless, a noticeable performance degradation becomes apparent in later target domains. This observation underscores a crucial finding: while increasing the number of high-rank ViDAs might enhance domain-specific knowledge acquisition during the initial phases of CTTA, it simultaneously exacerbates catastrophic forgetting throughout the entire adaptation process.
In contrast, the fusion of both low-rank and high-rank ViDAs (Ex4) yields the most substantial improvement when compared to other configurations. This collaborative approach leverages the distinct domain representations of these adapters to compensate for each other's advantages and achieve a more robust and effective continual adaptation strategy.
- Q3 'Re-parametrizations': Following the linear design of the adapter, we integrate trainable ViDAs into the original model, ensuring no additional parameters burden the network. Since the fully connected layers of both low-rank and high-rank ViDAs are linear, the re-parametrization process for both is identical. Specifically, we denote the parameter matrices Wup and Wdown to represent the up-projection and down-projection linear layers, respectively. The adapter's output Ya is computed as Ya = Wup(Wdown(x)), where x is the input. Given the linear relationship between Wup and Wdown, we construct a composite parameter matrix Wa, where Wa = Wup · Wdown. Consequently, the adapter's output can be expressed as Ya = Wa(x). Meanwhile, Wo signifies the parameter matrix of the original model's fully connected layer. The fused output Yf is computed as Yf = Wo(x) + Wa(x), or succinctly as Yf = (Wo + Wa)(x). Analogously, due to the linear relationship between Wo and Wa, the new parameter matrix Wf (Wf = Wo + Wa) is constructed to re-express the fused output Yf = Wf(x). Finally, we will fix all the suggestions in the final version.
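The re-parameterization above can be sanity-checked with a minimal numerical sketch (toy dimensions and arbitrary weights, not the paper's configuration), verifying that the two-branch output Wo(x) + Wup(Wdown(x)) equals the fused (Wo + Wup·Wdown)(x):

```python
# Toy verification of linear-adapter re-parameterization (pure Python).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(A, x):
    return [sum(A[i][k] * x[k] for k in range(len(x))) for i in range(len(A))]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# Hypothetical sizes: d = 4 features, adapter rank r = 2.
W_down = [[1, 0, 2, 0], [0, 1, 0, 3]]                # r x d down-projection
W_up   = [[1, 2], [0, 1], [2, 0], [1, 1]]            # d x r up-projection
W_o    = [[1 if i == j else 0 for j in range(4)] for i in range(4)]  # original layer

x = [1.0, 2.0, 3.0, 4.0]

# Two-branch output: Yf = Wo(x) + Wup(Wdown(x))
y_branch = [a + b for a, b in zip(matvec(W_o, x),
                                  matvec(W_up, matvec(W_down, x)))]

# Re-parameterized output: Yf = (Wo + Wup * Wdown)(x)
W_a = matmul(W_up, W_down)
y_fused = matvec(madd(W_o, W_a), x)

assert y_branch == y_fused   # identical outputs, no extra inference parameters
```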
---
Rebuttal 2:
Title: Looking Forward to Seeing Your Response!
Comment: Dear reviewer dPMr,
As the discussion phase is quickly passing, we want to know if you have any further questions or suggestions, and we are more than happy to discuss. Thanks again for your valuable reviews!
Best, All anonymous authors
---
Rebuttal 3:
Title: Waiting For Your Response
Comment: Dear Reviewer dPMr,
With the discussion phase swiftly progressing, we seek to confirm if our response addresses your concerns. Feel free to inquire about any remaining questions, and we'll provide prompt responses. If your concerns have been addressed, we would greatly appreciate it if you considered raising your score. Thank you for your valuable time and insightful comments.
Best, All anonymous authors | Summary: This paper aims to address continual test-time adaptation (CTTA) with parameter-efficient fine-tuning techniques, i.e., adapter. The authors find that the low-rank adapter i.e., standard bottleneck structure, can extract domain-invariant knowledge. On the other hand, a high-rank adapter can extract more domain-specific knowledge. Experiments on three image classification benchmarks and one semantic segmentaiton benchmark demonstrate the effectiveness of the proposed method based on ViT backbone.
Strengths: 1. The paper is easy-to-follow.
2. The topic is essential yet the idea is moderate. Both continual test-time adaptation and parameter-efficient fine-tuning are important topics for the community and the paper addresses two formulations simultaneously.
3. A new method that seems well-motivated and performs well on classification and segmentation benchmarks.
Weaknesses: 1. The major concern is that are the learned features domain-invariant related to the structure of the adapter? That is, low-rank adapter can learn domain-invariant features while high-rank adapter can learn domain-specific features. How about all adapters adopt the same structures?
2. What is the motivation of the teacher model? Just following CoTTA?
3. More relevant works [a,b,c, d, e] should be discussed.
4. What is the design of the low-rank and high-rank adapters in a CNN architecture? Why do the authors not report the result of ResNet50 in Table 1?
5. It seems that the proposed ViDA module is memory-intensive since all experiments are conducted on A100 GPUs. However, the backbone is just ResNet50 or ViT-base. Thus, does it mean that PEFT is memory-intensive or that the proposed ViDA module is?
6. It can be seen from the results in Table 3 that the model performance with the pre-trained encoder parameters of SAM significantly decreases, but the reason is unclear. It might be that SAM is a pixel-level foundation model, whereas the performance gains with the image-level foundation model DINOv2 are limited. Also, I am interested in the results for Cityscapes-to-ACDC with SAM pre-trained parameters.
Refs:
[a] Niu et al., Towards stable test-time adaptation in dynamic wild world.
[b] Yuan et al., Robust test-time adaptation in dynamic scenarios.
[c] Song et al., EcoTTA: Memory-Efficient Continual Test-time Adaptation via Self-distilled Regularization.
[d] Gong et al., NOTE: Robust continual test-time adaptation against temporal correlation.
[e] Döbler et al., Robust mean teacher for continual and gradual test-time adaptation.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. What do the t-SNE results of other blocks look like? Why did the authors select the third transformer block to analyze in Fig. 1 (b)?
2. What is the meaning of ``prototype``?
3. Minor:
- ref [55] and ref [56] are duplicated.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors claimed one limitation of this work, i.e., introducing extra parameters and computational cost.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - Q1 'Different domain representation of ViDAs': Thank you for the comprehensive comments. The experiment analysis of all adapters with the same structure is shown in Reviewer#vo5W Q3 while more justifications are illustrated in the global rebuttal Q1.
- Q2 'Why teacher model': Motivated by the fact that the mean teacher predictions have a higher quality than the standard model [52], we utilize a mean-teacher model to provide more accurate predictions in the continual adaptation process. Meanwhile, since [e] observes that mean teachers are more robust in dynamic environments, we leverage the teacher model to maintain the stability and previous domain knowledge during continual test-time adaptation (CTTA). Besides, we aim to conduct a fair comparison with previous CTTA works [16,57], thus we utilize a similar teacher-student framework in our pipeline. Notably, the flexibility of ViDA allows integration with various optimization methodologies, like Tent [56].
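As a hedged sketch of the mean-teacher mechanism referenced above (the momentum value is an illustrative placeholder, not the paper's setting): the teacher's weights are an exponential moving average (EMA) of the student's, which smooths predictions over the adaptation trajectory.

```python
# Illustrative EMA teacher update: theta_t <- alpha * theta_t + (1 - alpha) * theta_s.
# alpha is a placeholder momentum; real pipelines typically use values near 0.999.
def ema_update(teacher, student, alpha=0.999):
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher, student)]

teacher = [0.0, 0.0]
student = [1.0, 1.0]
for _ in range(3):        # the teacher drifts slowly toward the student
    teacher = ema_update(teacher, student, alpha=0.9)
assert all(0.0 < t < 1.0 for t in teacher)
```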
- Q3 'More related discussion': References [a] and [b] focus on tackling TTA in dynamic scenarios, which is different from traditional CTTA addressed by [16,57,c]. Specifically, [a] demonstrates the benefits of batch-agnostic norm layers compared to BN under wild test settings and addresses model collapse; however, it doesn't directly tackle the continual shift challenge. [b] introduces a robust BatchNorm layer for Practical Test-Time Adaptation (incorporating distribution changes and correlation sampling), but ignores catastrophic forgetting in dynamic environments. Notably, [d] introduces instance-aware normalization and prediction-balanced reservoir sampling for stable adaptation, but it does not explicitly consider instability facing long-term shifting distributions. In contrast, EcoTTA [c] introduces a meta-network for the CTTA problem, aiming to avoid error accumulation by regularizing the outputs of the meta-network and the frozen network. However, a significant portion of the frozen network only retains source domain knowledge, overlooking target domain knowledge. On the other hand, RMT [e] tackles error accumulation via gradient analysis and introduces a symmetric cross-entropy loss for CTTA. Our approach diverges from these by employing a novel adapter-based adaptation scheme that adopts different domain representations of low-rank and high-rank ViDA to explicitly tackle the catastrophic forgetting and error accumulation problems. As shown in the following Table, our method achieves superior performance compared to RoTTA [b], NOTE [d], and EcoTTA [c] on CIFAR10-C with a WideResNet-28 backbone. Since ViDA is a parameter-efficient and plug-and-play method, it has the potential for integration with other methods to collectively tackle the challenge of continual distribution shift.
| CIFAR10-C (WideResNet-28) | RoTTA [b] | NOTE [d] | EcoTTA [c] | RMT [e] | Ours |
| --- | --- | --- | --- | --- | --- |
| Average error rate | 17.5 | 20.2 | 16.8 | 14.5 | 15.8 |
- Q4 'CNN architecture': As described in Lines 181-183 of the main paper and Lines 111-114 of the supplement, we employ 1 × 1 convolutional layers instead of linear layers to construct both the down-projection and up-projection layers for CNN backbones. As shown in the following table, we carry out an ImageNet-to-ImageNet-C CTTA experiment with a ResNet50 backbone. Our method achieves a 61.2% average error rate, obtaining a substantial decrease of 20.8% compared to the source model. Meanwhile, the CIFAR100-to-CIFAR100C CTTA experiment with the ResNeXt-29 backbone also shows the effectiveness of our approach. These findings show the robust performance of our method in addressing the challenges of the CTTA problem with a CNN backbone. Given the remarkable performance and generalization capability demonstrated by vision transformers [14], we employ transformer backbones in our main CTTA settings.
| Method(ResNet50) | Source | BN adapt[31] | Tent[35] | CoTTA[36] | Ours |
| --- | --- | --- | --- | --- | --- |
| Average error rate | 82.0 | 68.6 | 62.6 | 63.0 | 61.2(+20.8%) |
| Method(ResNeXt-29) | Source | BN adapt[31] | Tent[35] | CoTTA[36] | Ours |
| --- | --- | --- | --- | --- | --- |
| Average error rate | 46.4 | 35.4 | 60.9 | 32.5 | 31.5(+14.9%) |
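As a minimal illustration of the 1 × 1-convolution construction described in Q4 (toy shapes and weights, not the actual adapter configuration): a 1 × 1 convolution is a per-pixel linear map over channels, so the down-/up-projection adapter design transfers directly to CNN features.

```python
# Illustrative 1x1 convolution as a channel-wise linear layer (pure Python).
def conv1x1(weight, feat):
    """weight: out_c x in_c, feat: in_c x H x W -> out_c x H x W."""
    in_c, h, w = len(feat), len(feat[0]), len(feat[0][0])
    return [[[sum(weight[o][i] * feat[i][y][x] for i in range(in_c))
              for x in range(w)] for y in range(h)] for o in range(len(weight))]

feat = [[[1.0, 2.0], [3.0, 4.0]],      # in_c = 2 channels, 2x2 spatial map
        [[5.0, 6.0], [7.0, 8.0]]]
W_down = [[1.0, 1.0]]                  # rank-1 down-projection: 2 -> 1 channel
W_up   = [[2.0], [0.5]]               # up-projection back: 1 -> 2 channels
adapter_out = conv1x1(W_up, conv1x1(W_down, feat))
assert len(adapter_out) == 2           # channel count restored for the residual add
```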
- Q5 'Memory-intensive': When utilizing a backbone such as ResNet50 or ViT-base, ViDA is not memory-intensive. In our ImageNet-to-ImageNet-C CTTA experiment, our pipeline utilizes only 18GB out of the available 80GB GPU memory on an A100 (image resolution = 224x224 and batch size = 32).
- Q6 'SAM': We conduct segmentation CTTA with SAM pre-trained parameters on the Cityscapes-to-ACDC scenario. However, it's worth noting that Segformer [58], which we used in the main experiments, does not incorporate positional encoding. We thus adopt the SETR [f] model as our new baseline to load SAM's pre-trained parameters. As shown in the following Table, our approach with SAM pre-trained parameters outperforms others on the ACDC target domains. This aligns with your assumption: SAM, being a pixel-level foundational model, excels in capturing fine-grained feature representations in dense CTTA tasks. [f] Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers
| | Pretrained | Fog | Night | Rain | Snow | Mean (IoU) |
| --- | --- | --- | --- | --- | --- | --- |
| Source | Source model | 72.6 | 43.1 | 63.0 | 64.3 | 60.8 |
| Source | SAM | 74.8 | 44.1 | 66.7 | 66.6 | 63.0 |
| CoTTA | SAM | 75.4 | 45.9 | 67.3 | 68.7 | 64.3 |
| Ours | SAM | **76.5** | **47.2** | **68.1** | **70.7** | **65.6** |
- Q7 't-SNE': Due to space limitations, we analyze the middle layer of the backbone in our t-SNE study. Additional t-SNE results for the first and last transformer blocks are available in rebuttal PDF Figure 1, showing the same tendency as the results from the third transformer block.
- Q8 'Prototype': The prototype refers to the intermediate feature latent space of ViDAs. We will also address the 'Minor' issue in the final version.
---
Rebuttal Comment 1.1:
Title: Further discussion
Comment: Dear authors,
I really appreciate the authors' responses. I have read your rebuttal and almost all of my concerns have been resolved. Here are my comments.
First, I am indeed surprised at the ability of different structures (low-rank & high-rank) to learn different features (domain-invariant & domain-specific). The authors should have emphasised these interesting findings in the manuscript.
Second, with the rapid development of the field of test-time adaptation over the past two years, the authors should pay more attention to the latest works and then discuss and compare them as much as possible to highlight the strengths of this paper.
Thirdly, I am very grateful for the results provided in the rebuttal phase and hope to see more analyses in the camera-ready version! For example, integration with other methods and experiments on ResNet, etc.
Certainly, my current inclination is to extend my support to this paper, and have revised my rating.
---
Reply to Comment 1.1.1:
Title: Further Discussions for Reviewer DEpm
Comment: Dear reviewer DEpm,
We appreciate your valuable feedback and the acknowledgment of our proposed method.
Firstly, we will elaborate on the justifications for the diverse domain representations of low-rank ViDA and high-rank ViDA in the manuscript, as evidenced in our rebuttal responses.
Secondly, we would like to extend our gratitude for your insightful suggestions for improvement. We intend to delve deeper into the latest test-time adaptation approaches (e.g., [a][b][c][d][e]) and establish a comprehensive comparison to underscore our method's strengths in the final version.
Furthermore, we are committed to incorporating the rebuttal experiments and providing in-depth analysis in the camera-ready version. It's important to note that we will particularly focus on refining the CNN-based CTTA experiments and validating the seamless integration capability of our proposed method.
Thank you again for your time and insightful comments.
Strengths: - The paper introduces a homeostatic visual domain adapter for continual test-time adaptation.
- Good results are achieved on both classification and segmentation tasks.
Weaknesses: - In general, the proposed components are not very well justified and verified. The details are listed in the following points.
- Except for the empirical results, it lacks more in-depth analyses and justifications on why a low-rank prototype focuses on domain-agnostic features while a high-rank prototype focuses on domain-specific knowledge. In addition, Figure 3 (a) shows the reduction of inter-domain divergence due to the introduction of the low-rank adapters. The reviewer wonders what the inter-domain divergence results would be when using the high-rank adapters.
- In Table 1, the experimental results are not clearly explained. For example, what is the difference between the last two rows?
- In Table 6, both ViDAh and ViDAl could improve the performance significantly. However, it is still unclear why the different adapters (high-rank and low-rank) are necessary. How can one effectively show that they complement each other? It would be interesting to know whether two high-rank adapters or two low-rank adapters could also achieve good results.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Why does a low-rank prototype focus on domain-agnostic features while a high-rank prototype focuses on domain-specific knowledge?
- How can one effectively show that ViDAh and ViDAl complement each other?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors addressed some of the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - Q1: 'Different domain representation of low-rank ViDA and high-rank ViDA': Thank you for the constructive advice, please refer to the global rebuttal Q1, including the justifications of H-divergence verification[18], Class Activation Mapping (CAM) visualization, and long-term CTTA experiment.
- Q2 'More explanations of Table 1': Thank you for the detailed comments; we will add more explanations of Table 1 in the camera-ready version. The second row from the bottom of the table represents the standard ImageNet-to-ImageNet-C Continual Test-Time Adaptation (CTTA) experiment using our proposed method. Evidently, our approach outperforms both the source model and the prior state-of-the-art (SOTA) method, showing the effectiveness of our approach in addressing continual domain shifts. In reference to the last row, please refer to Lines 150-152 and Lines 267-268 for the explicit description of the experimental setup. This setup has been devised to validate our method's capability to mitigate the problem of catastrophic forgetting. Specifically, we preserve the final parameters of both the model and ViDAs after completing a round of CTTA. Subsequently, we utilize these fixed parameters to evaluate the model's performance across all previously encountered target domains within ImageNet-C, without further parameter updates. As illuminated in Table 1, our method showcases an improvement of 1.0% in the average classification error compared to the result presented in the second row from the bottom. This outcome robustly substantiates that our approach effectively avoids the problem of catastrophic forgetting within the context of a continually changing environment.
- Q3 'Complement each other': Thank you for your valuable insights. We have executed an ImageNet-to-ImageNet-C CTTA experiment using a combination of two high-rank adapters or two low-rank adapters, as shown in the rebuttal PDF Table 1. To ensure fairness, we conducted these experiments without implementing the homeostatic knowledge allotment (HKA) strategy. Notably, the two low-rank adapters (Ex2) demonstrated consistently lower long-term error rates compared to the source model and two high-rank adapters. The above results can be attributed to the fact that the two low-rank ViDAs tend to learn general information and domain-invariant knowledge during continual adaptation. However, our method outperforms the two low-rank adapters across 14 out of 15 corruption types. This indicates that solely relying on low-rank adapters without the involvement of high-rank adapters is insufficient to fit target domains and match their data distribution. On the other hand, the performance of the two high-rank adapters initially surpasses our approach (Ex4) in the early stages, covering the first few target domains. Nevertheless, a noticeable performance degradation becomes apparent in later target domains. This observation underscores a crucial finding: while increasing the number of high-rank ViDAs might enhance domain-specific knowledge acquisition during the initial phases of CTTA, it simultaneously exacerbates catastrophic forgetting throughout the entire adaptation process. In contrast, the fusion of both low-rank and high-rank ViDAs (Ex4) yields the most substantial improvement when compared to other configurations. Our collaborative approach leverages the distinct domain representations of these adapters to compensate for each other's advantages and achieve a more robust and effective continual adaptation strategy.
---
Rebuttal Comment 1.1:
Title: Post rebuttal
Comment: The authors have conducted additional extensive experiments and most of my concerns are addressed. So I would like to update my rating and lean towards acceptance.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer vo5W
Comment: Dear Reviewer vo5W,
We greatly appreciate your valuable feedback and the acknowledgment of our proposed method that introducing high-rank and low-rank adapters to extract domain-specific and domain-invariant knowledge in the CTTA problem, respectively. And we are committed to incorporating the additional experiments and providing in-depth analysis in the camera-ready version.
Best, All anonymous authors
---
Rebuttal 2:
Title: Looking Forward to Seeing Your Response!
Comment: Dear Reviewer vo5W,
Given the discussion phase is quickly passing, we want to know if our response resolves your concerns. If you have any further questions, we are more than happy to discuss them. Thanks again for your valuable suggestions!
Best, All anonymous authors | Rebuttal 1:
Rebuttal: **To ALL:**
- Q1. 'Different domain representations of low-rank ViDA and high-rank ViDA'.
- Thank you for the comprehensive comments, and we will add the in-depth analyses and justifications in the final version, including different domain representations of low-rank and high-rank ViDAs. In section 3.1 and Figure 3 (a) of the main paper, according to previous domain transfer research (DANN [18]), we employ the **H-divergence metric** to evaluate the domain representations of ViDAs across different target domains. Following the suggestion of Reviewer vo5W, “verification the inter-domain divergence by using the high-rank adapter”, we supplement the results in the rebuttal PDF Figure 4, where the visual representation showcases the different knowledge extraction of ViDAs. These experiments have been conducted within the CIFAR10-to-CIFAR10C Continual Test-Time Adaptation (CTTA) scenario. As shown in the rebuttal PDF Figure 4, the feature representation produced by the high-rank adapter displays a more pronounced divergence when compared to the output of the low-rank adapter. It is important to note that when dealing with later target domains or encountering substantial domain shifts between two adjacent domains (as seen in the case of target domains 9-13), the inter-domain divergence of the high-rank adapter shows a notably higher value than others and presents a close divergence value compared with the original source model. In conjunction with the lower intra-class divergence presented in Figure 3(b) of submission, it is evident that while the high-rank adapter effectively facilitates domain-specific knowledge extraction in the current target domain, it simultaneously forfeits some of the domain knowledge acquired from preceding domains. In contrast, as shown in the rebuttal PDF Figure 4, the feature representation generated by the low-rank adapter continually demonstrates lower divergence compared to the others. 
Since the low-rank structure minimizes the redundancy within the feature representation, it compels the model to concentrate more on task-relevant information and mitigate the problem of catastrophic forgetting within the continual adaptation process.
- In addition, we have extended our analysis by incorporating the visualization of **Class Activation Mapping (CAM)** on the ImageNet-to-ImageNet-C CTTA scenario. Specifically, we adopt CAM to compare the attention of the low-rank branch, high-rank branch, and the original model during the continual adaptation process. To elaborate, as shown in the rebuttal PDF Figure 3, we showcase the feature representation of the images from different target domains, including the noise of Gaussian and defocus blur. Our observations reveal that the low-rank ViDA is inclined to put more weight on the foreground sample while tending to disregard background noise shifts. This indicates that the low-rank ViDA attends to locations with more general and domain-agnostic information from target domains. Conversely, the high-rank ViDA exhibits an inverse pattern, as illustrated in the last column of rebuttal PDF Figure 3. It allocates more attention to locations characterized by substantial domain shift, encompassing the entirety of the input images. This behavior aligns with the high-rank branch's tendency to fit global information and predominantly extract domain-specific knowledge from the target domain data.
- In order to provide further substantiation for the distinct domain representations of the low-rank and high-rank ViDAs, we have executed a **10-round CTTA experiment** on ImageNet-to-ImageNet-C. The outcomes of this experiment can be accessed through the rebuttal PDF Figure 2. In this comprehensive experiment, we simulate a long-term adaptation scenario by repeating the sequence of 15 corruptions in ImageNet-C for 10 rounds. Remarkably, the high-rank ViDA achieves results competitive with other methods during the initial 1 to 3 rounds. This result demonstrates the high-rank feature's capacity to efficiently learn target domain-specific knowledge. However, an increase in error rates becomes apparent during the later rounds (rounds 5 to 10). The results validate the potential for encountering catastrophic forgetting when focusing exclusively on domain-specific knowledge. In contrast, the performance of the low-rank ViDA remains consistently robust throughout the continual adaptation process, verifying that it concentrates more on extracting task-relevant knowledge and effectively prevents the catastrophic forgetting problem.
- In conclusion, our analysis and justifications support our hypothesis concerning the different domain representations of low-rank and high-rank ViDAs. In the case of low-rank ViDA, its structure reduces feature redundancy, leading to an underfit state during CTTA. As a result, it leans towards acquiring general information across continuous target domains, extracting domain-agnostic knowledge to mitigate catastrophic forgetting. In contrast, high-rank ViDA employs a higher-dimensional feature representation that better matches the data distribution in target domains, focusing on learning domain-specific knowledge to prevent error accumulation. Importantly, the specialized structures of low-rank and high-rank ViDAs contribute to distinct domain representation learning, which we further control using the Homeostatic Knowledge Allotment (HKA) strategy. For instance, when meeting the sample with a large distribution shift, we will increase the fusion weight of the high-rank feature (Equation 4), aiming to extract more domain-specific knowledge through the high-rank ViDA.
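To make the fusion-weight idea concrete, the following is a hypothetical toy sketch (not the paper's implementation) of combining a frozen linear layer with a low-rank and a high-rank adapter. The names `shift_score` and `tau` are illustrative stand-ins for the paper's uncertainty-based HKA criterion (Equation 4), and the adapters are plain linear maps on short vectors.

```python
def linear(x, w):
    """Matrix-vector product; w is a list of rows."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def hka_fusion(x, w_frozen, w_low, w_high, shift_score, tau=0.5):
    """Toy sketch of homeostatic knowledge allotment (HKA).

    Hypothetical stand-in for the paper's Equation 4: a sample with a
    large estimated distribution shift (shift_score > tau) receives a
    larger fusion weight on the high-rank (domain-specific) branch,
    while the total adapter contribution is kept fixed.
    """
    if shift_score > tau:
        lam_high = 1.0 + shift_score
    else:
        lam_high = 1.0 - shift_score
    lam_low = 2.0 - lam_high  # keep the combined adapter weight constant
    y_frozen = linear(x, w_frozen)
    y_low = linear(x, w_low)
    y_high = linear(x, w_high)
    return [a + lam_low * b + lam_high * c
            for a, b, c in zip(y_frozen, y_low, y_high)]
```

With a high `shift_score`, the high-rank branch dominates the adapter contribution; with a low one, the low-rank branch does, mirroring the "increase the fusion weight of the high-rank feature" behavior described above.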
Pdf: /pdf/56eb21a6950556d42f148a3619dcf865f995217a.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Stable and low-precision training for large-scale vision-language models | Accept (poster) | Summary: The paper proposes methods to improve the training of large vision-language models. On the one hand, the authors propose a new linear layer for int8 quantized training, dubbed SwitchBack. On the other hand, a new optimizer is presented, StableAdamW, which results from combining AdamW with the update clipping technique proposed by AdaFactor [52]. These two simple but effective measures allow for faster and more stable training of large vision-language models.
Strengths: - Simply having to use 8-bit precision for the first two matrix multiplies of a linear layer (forward pass to compute output + backward pass to compute gradients of the input) is a powerful way to achieve notable speed-ups while keeping the same performance as when using 16-bit training. I find this a very insightful contribution that can accelerate training for many projects that leverage transformer architectures.
- The contributed StableAdamW (AdamW + update clipping from AdaFactor) addresses instabilities that can arise during training of large vision-language models. This is a well-known issue in the community, and addressing it could have a major impact on future research. The fact that StableAdamW outperforms other stability control measures such as gradient clipping or setting a lower β2 already opens up the door for very easy stability fixes across the board for vision-language models, at least for CLIP-based ones.
- Detailed comparisons and results are presented.
Weaknesses: - It could be interesting if the authors briefly discussed whether the results found in this paper generalize to downstream tasks, e.g., object detection or semantic segmentation. In other words, can other researchers directly apply SwitchBack layers in other transformer architectures, such as OneFormer? Can we also directly use StableAdamW? This is mentioned in the limitations section, which is appreciated. But any further points on this could be valuable to the community, as many of the breakthroughs in classification tasks have been found not to always easily translate into gains on downstream tasks.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - Minor typo in line 282? “for in AdamW.”
- I assume that code will be released. Could the authors confirm? If the results presented here are indeed generalizable across vision-language models, it could have major implications for the advancement of the community.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Limitations have been addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful comments and thorough review.
- Weakness 1: We agree that the paper would benefit from a discussion on whether we can expect the results to generalize to downstream applications. Our analysis in Section D of the supplementary material suggests that the approach is best applied when the inner dimension of the matrix multiplication for the weight gradient computation is large, which is typically the case for large batch training. Moreover, we believe that one of the benefits of CLIP style models is that they can be used in downstream applications without modifying the weights. That is, we believe that our models may be used for a drop-in replacement in stable diffusion to provide speed-ups. We will revise the paper to elaborate on this discussion.
- Question 1: Thank you very much for catching this, it is indeed a typo.
- Question 2: We confirm that code will be released. Indeed, we have focused on making our triton kernels clear and hackable. While we have optimized our kernels to the best of our ability, we believe that by open-sourcing them and making them hackable, the community will be able to iterate and further the already observed speed-ups.
---
Rebuttal Comment 1.1:
Comment: Thank you. It sounds good. Appreciate the effort to make this available to the community.
Please fix the typos and add the proposed clarifications.
Thanks!
---
Reply to Comment 1.1.1:
Comment: Of course, thank you again for your helpful comments and thorough review. | Summary: The paper is a study of different ingredients necessary to train an int8-quantized CLIP model. They mainly address two aspects:
1. quantization: they build on top of techniques previously applied to LLM inference (LLM.int8()) and expand them to the CLIP setting to optimize both inference and training; the 13-25% training speedup and negligible (~0.1%) quality drop are achieved by carefully selecting which parts of the model are quantized to 8 bits (forward pass and part of the backward pass) and which are kept in 16 bits (the rest of the backward pass),
2. stabilization: they observe spikes in the loss and correlate them with times when the state of the Adam optimizer gets outdated; this is mitigated by applying to Adam the same trick that was already established in the literature to stabilize AdaFactor
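The int8 quantization in point 1 can be made concrete with a toy symmetric per-tensor int8 dot product. This is a pure-Python illustration, not the paper's Triton kernel: in SwitchBack, such an int8 path would serve the forward output and input-gradient matmuls, while the weight-gradient matmul (whose large inner dimension amplifies quantization noise) stays in 16-bit.

```python
def quantize_int8(v):
    """Symmetric per-tensor quantization: map values into [-127, 127]."""
    scale = (max(abs(x) for x in v) or 1.0) / 127.0
    q = [max(-127, min(127, round(x / scale))) for x in v]
    return q, scale

def int8_dot(a, b):
    """Dot product accumulated over int8 values, dequantized at the end."""
    qa, sa = quantize_int8(a)
    qb, sb = quantize_int8(b)
    return sum(x * y for x, y in zip(qa, qb)) * sa * sb
```

Comparing `int8_dot` against the exact float dot product on small vectors shows errors on the order of the quantization step; the longer the inner dimension, the more such errors accumulate, which is the intuition for keeping the weight-gradient matmul in higher precision.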
Strengths: 1. The authors manage to train the biggest quantized CLIP up-to-date, with negligible quality drop. Learning that this is possible to do is a valuable contribution.
2. The source of training instability at scale is identified and characterized, and a way to mitigate it is proposed.
Weaknesses: The proposed quantization method targets linear layers, which are “usually >90% of compute in standard transformer models”. However, it seems that there is significant complexity introduced for a relatively small speedup of 13-25%. Was that expected? What are the remaining bottlenecks which remain to be solved? It would be interesting to get more details around this topic.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Is the training instability present only when quantization is used, or even without it?
2. How to explain that loss curves for grad clipping and adam clipping are the same, but there is a difference in imagenet zero-shot accuracy in Table 5? Is this difference repeatable over several runs, or could it be measuring noise?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful review. We hope the following addresses your interesting questions and comments.
- Weakness 1: We believe there are a number of reasons that the speed-up is only 13-25%. For one, 16-bit baselines have had years of hardware support and optimization, while the same is not true for eight bit formats. We believe that more speedups can be expected as hardware support improves. For example, initial data from H100 GPUs indicates that 8-bit speedups are much higher for these GPUs, but we did not include these benchmarks since we have no access to H100 GPUs. In addition, as we illustrate in Figure 6 and 10 of the supplementary material, the quantization operations (i.e., changing from 16 bit to 8 bit precision) incur overhead which reduces the overall speed-up. However, as we show in Figure 7 of the supplementary material, this overhead decreases in proportion at scale.
- Question 1: The instabilities we discuss in Section 3 are present even without quantization.
- Question 2: We believe that this is not noise because we consistently observe a difference across all four values of beta2 that we try as illustrated in Figure 5 (right).
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. | Summary: This paper introduces an efficient and stable INT8 training method for models similar to CLIP. The proposed method offers a training latency improvement of 10-20%, which could account for a significant portion of training costs for larger models. The authors leverage LLM.int8() kernels for training, taking into account the quantization challenge for weights. Additionally, the paper explores the potential issues associated with FP8 training for such models.
Strengths: - The paper is beneficial to the community due to the rising interest in low-precision training, such as INT8 and FP8. In particular, understanding how FP8, which is supported by H100 GPUs, performs across different models is essential. This work provides useful insights into these topics.
- The authors propose system/hardware-aware methods, including open-sourcing triton implementations and fused kernels, which could prove valuable for many researchers.
- The zero-init layer-scale method, a simple yet effective approach to tensor-wise quantization, is proposed.
- The systematic study of loss spike offers many points for consideration and deeper analysis.
- The introduction of the novel predictive power of patch embedding RMS is noteworthy.
Weaknesses: - The paper's SwitchBack approach seems to lack novelty, as it simply observes the correlation of inner dimensions with quantization noise. The authors could explore more ingenious solutions to tackle the large inner dimension of weight gradient matmul, rather than just resorting to higher precision computing.
- Figure 1 raises a concern as LLM.int8() might not serve as an adequate baseline. LLM.int8 was designed for LLMs, not models like CLIP. Additional PTQ baselines, including MRE methods, should be considered for int8 training.
- There's a fundamental curiosity about the applicability of low-precision training like INT8/FP8. While we can establish an FP16 baseline for well-known models, it becomes challenging when we train newly updated models or datasets with no baseline. If the training fails, it's unclear if the failure stems from precision issues or model issues. Although this paper's analysis and proposal could assist in these processes, the argument still stands: why use low-precision training if stable results can be obtained with slightly slower speeds? Because this open question applies to all low-precision training and not solely to this paper, I will respect feedback from other reviewers in this aspect.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: included in weaknesses
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: included in weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your insightful comments. We hope the following addresses any concerns.
- Weakness 1: We absolutely agree that resorting back to higher precision is a weakness and a more clever technique could alleviate this. We thank you for highlighting this as we believe it’s one of the most promising avenues for future research, and will revise the paper to indicate this shortcoming.
- Weakness 2: Our focus is on faster training via quantization and not quantization after training. Appendix E of the LLM.Int8() paper establishes it as a practical method for large scale quantized training, though it is commonly regarded as a PTQ technique.
- Weakness 3: Thank you very much for this very insightful question about why quantized training is even warranted. Indeed, we believe that this is very much an open question. However, if the current scaling trend persists for more years, it is likely that training runs in the future will cost upwards of 100 million US dollars. Therefore, any reduction in training time could be the difference between whether a run is feasible or not. Given the potential savings, we believe institutions will take (and perhaps are already taking) the risk. | Summary: This paper proposed methods for accelerating and stabilizing CLIP training. To accelerate training, the authors proposed the SwitchBack method, which quantizes the precision to int8 for the first two matrix multiplies but switches back to higher precision for the weight gradient. The proposed method speeds up CLIP ViT-Huge training by 13-25%. To stabilize training, the authors proposed to combine the AdamW and the Adafactor optimizers and call the proposed new optimizer StableAdamW. It is observed from the experiments that when the squared gradients become underestimated by AdamW's second-moment estimator, loss spikes will occur after several iterations.
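As a hedged sketch of the mechanism summarized here (not the authors' implementation; hyperparameter values are illustrative), StableAdamW can be written as AdamW whose learning rate is divided by max(1, RMS_t), where RMS_t is the root-mean-square of g^2 / v_hat, the update-clipping idea borrowed from AdaFactor:

```python
import math

def stable_adamw_step(p, g, m, v, t, lr=1e-3, b1=0.9, b2=0.95,
                      eps=1e-8, wd=0.0):
    """One StableAdamW step on lists of scalars (toy sketch).

    When the second-moment estimate underestimates the squared
    gradients (RMS_t > 1), the step is shrunk, which is what the
    review ties to the observed loss spikes.
    """
    m = [b1 * mi + (1 - b1) * gi for mi, gi in zip(m, g)]
    v = [b2 * vi + (1 - b2) * gi * gi for vi, gi in zip(v, g)]
    m_hat = [mi / (1 - b1 ** t) for mi in m]
    v_hat = [vi / (1 - b2 ** t) for vi in v]
    rms = math.sqrt(sum(gi * gi / max(vh, eps) for gi, vh in zip(g, v_hat))
                    / len(g))
    step = lr / max(1.0, rms)  # AdaFactor-style update clipping
    p = [pi - step * (mh / (math.sqrt(vh) + eps) + wd * pi)
         for pi, mh, vh in zip(p, m_hat, v_hat)]
    return p, m, v
```

Note that at t=1 with zero optimizer state, v_hat equals g^2 exactly, so RMS_t = 1 and no clipping occurs; the clipping only engages once the second-moment estimate lags behind the current gradients.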
Strengths: The motivation for the ideas is very clear. To train large-scale language-vision models, speed is very critical. Training stability is also very important because we need to train with large-scale data without performance degradation. These two factors are essential for training large-scale language-vision models.
The authors proposed the SwitchBack method that quantizes the precision to int8 for fast matrix multiplication and then transforms back to the original floating-point precision for the weight gradient. The authors provide PyTorch code illustrating this process in Algorithm 1. Experiments show that the proposed method can speed up CLIP ViT-Huge training by 13-25%.
The authors proposed the StableAdamW optimizer for training large-scale language-vision models. The authors observed that if the squared gradients are underestimated by AdamW's second-moment estimator, then the loss spike would occur in the subsequent few iterations. The authors proposed to combine the AdamW and the Adafactor to get their proposed StableAdamW optimizer. Experimental results show that StableAdamW stabilizes training and helps the model achieve higher zero-shot performance than the AdamW optimizer and the AdamW optimizer with gradient clipping.
Weaknesses: The experiments are only conducted for the CLIP training. It's unknown whether the proposed SwitchBack and StableAdamW will work for other language-vision pretraining models such as BEiT v2, BEiT v3, or BLIP. Also, since CLIP only uses contrastive loss, it's not clear whether the proposed methods will work for other losses.
It's unclear whether the authors used the global contrastive loss computed across all GPUs or the local contrastive loss computed on each GPU. The global contrastive loss also affects speed, accuracy, and stability.
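For context on the global-vs-local distinction, here is a toy version of CLIP's symmetric contrastive (InfoNCE) loss on one batch of paired embeddings, assumed L2-normalized. This is an illustrative sketch only: in the "global" variant, `img_feats`/`txt_feats` would be the all-gathered features from every GPU, so each sample sees the full batch of negatives; in the "local" variant they would be just the per-GPU shard, giving fewer negatives.

```python
import math

def clip_contrastive_loss(img_feats, txt_feats, temperature=0.07):
    """Symmetric InfoNCE over paired (image, text) embeddings: the i-th
    image should match the i-th text against all others in the batch."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    n = len(img_feats)
    loss = 0.0
    for side_a, side_b in ((img_feats, txt_feats), (txt_feats, img_feats)):
        for i in range(n):
            logits = [dot(side_a[i], b) / temperature for b in side_b]
            log_z = math.log(sum(math.exp(l) for l in logits))
            loss += log_z - logits[i]  # cross-entropy with target index i
    return loss / (2 * n)
```

Because the number of negatives grows with the (global) batch, gathering features across GPUs changes the loss itself, which is why the reviewer notes it affects speed, accuracy, and stability.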
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: $x_n$ should be $x_b$ in Eq. 1?
Why only examine RMSt for the visual transformer patch embedding layer, visual.conv1.weight?
Why not compare SwitchBack with the quantization method in [1]?
Reference:
[1] Bai et al., Towards Efficient Post-training Quantization of Pre-trained Language Models, NeurIPS 2022.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Mentioned in Weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your insightful review and careful attention to details. We hope the following answers your questions.
- Question1: Regarding the typo in equation 1. Thanks very much for this catch, this is indeed a typo.
- Question 2: We find that the patch embedding layer is the source of instability, which also aligns with the findings of [1]. When we create the same plot for other layers, we do not observe the predictive relationship which we observe with the patch embedding. More information on this point is provided in L741 of the supplementary material, in particular Figure 20 of the supplementary material examines RMS and loss for a non-embedding layer.
- Question 3: Thank you for highlighting [2], we will revise to include this important reference. However, [2] focuses on post-training quantization, i.e., quantizing after training, while our paper focuses on training in low precision. Training in low precision is a much more difficult problem.
- Weakness 1: We agree that focusing on CLIP is a limitation, but also believe that large language-vision models are increasingly important as they underlie generative models and zero-shot methods, and that these models present a unique set of challenges. We will revise the paper to make this shortcoming more apparent.
- Weakness 2: We use the global contrastive loss, and will revise to include this information.
[1] https://arxiv.org/abs/2104.02057
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank the authors for the reply! Most of my questions are addressed.
---
Reply to Comment 1.1.1:
Comment: Thank you again for the constructive comments. Please don't hesitate to ask further questions if they come up. Also, if you feel it is warranted after our response we would of course appreciate if you consider raising your score. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors introduce new methods for accelerating and stabilizing training for large language-vision models. For acceleration, SwitchBack is proposed, which uses high precision in the backward pass to compute the gradients for the weights. For stability, they introduce an AdamW-Adafactor hybrid (StableAdamW).
Strengths: 1. This work aims to accelerate and stabilize the training of language-vision models, which is particularly important for large-scale language-vision applications.
2. The analysis of relationship between loss spike and AdamW second moment is interesting.
Weaknesses: 1. My main concern is about the novelty. The idea of using high precision for backward weight gradients is studied by previous works such as the Gradients Bifurcation method [1]. Float8 training is also widely studied. For the proposed StableAdamW, I did not see an essential difference from AdaFactor, despite the explanation around line 255.
2. It seems that the proposed methods are not designed particularly for language-vision models. It is better to compare with existing methods on traditional CV or NLP tasks.
minor issues:
line 331, divides the learning rate for iteration $t$ by $max(RMS_t,1)$.
line 154, why constrain the float8 within -1 and 1? float8cast(x/absmax(x))
[1] Scalable Methods for 8-bit Training of Neural Networks.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See the weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 1 poor
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your review, we hope this rebuttal addresses your concerns.
**Novelty**
We thank you for highlighting the gradient bifurcation method, and we will update the paper to include this important reference. As prior work has observed (e.g., [1]), quantization becomes more difficult at scale. Therefore, we believe that our experiments in this large scale setting, and for large vision-language models in particular, are valuable for the community. We emphasize that the largest vision models that have been studied before have been an order of magnitude smaller. In addition, we show that the technique can be simplified for float8 vs. int8.
Moreover, we do not claim methodological novelty over AdaFactor, but instead aim to draw the community's attention to an important feature of AdaFactor that is often overlooked, and show that this feature can be effectively transferred to Adam. AdaFactor is decreasing in popularity as it has been shown to underperform Adam at scale [2]. However, our results indicate that this may be due to factored moments and not some of AdaFactor’s other innovations which can effectively be transferred to Adam. Finally, our work is novel as we establish a predictive relationship for loss spikes.
**Why CLIP?**
We focus on the contrastive vision-language pre-training setting because we believe that it is of increasing importance as it underlies approaches from generative models to zero-shot classifiers. While the results may be more general, contrastive approaches require a very large batch size. As we discuss in Section D.3 of the supplementary material (L686), we believe that this distinction explains why LLM.int8() underperforms the bfloat16 baseline in the setting that we consider.
**Why constrain the float8 to [-1, 1]?**
We apologize for our lack of clarity – this is a typo. After converting to the [-1, 1] range, we multiply by the max value in float8.
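A sketch of the intended scaling, as we understand the clarification: divide by the tensor's absmax, multiply by the largest float8 value, then cast. `toy_f8_cast` here is only a stand-in that mimics the limited mantissa of a real float8 cast (no subnormals or saturation handling); 448 is the max finite value of the common e4m3 format.

```python
import math

F8_MAX = 448.0  # largest finite value of the e4m3 float8 format

def toy_f8_cast(x, mantissa_bits=3):
    """Round x to a float with only `mantissa_bits` of mantissa, a
    rough stand-in for a hardware float8 cast."""
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x)))
    q = 2.0 ** (e - mantissa_bits)
    return round(x / q) * q

def scaled_f8(xs):
    """Tensor-wise scaled cast: absmax-normalize into [-1, 1], multiply
    by F8_MAX, cast, and return the scale needed to invert it."""
    amax = max(abs(x) for x in xs) or 1.0
    scale = F8_MAX / amax
    return [toy_f8_cast(x * scale) for x in xs], scale
```

Dividing the cast values by the returned scale recovers the original tensor up to float8 rounding error, which is the round trip the kernels would perform around each low-precision matmul.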
[1] https://arxiv.org/abs/2208.07339
[2] https://arxiv.org/abs/2112.11446
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank the authors for the response! My main concern is still the lack of novelty. The proposed SwitchBack is previously studied in Gradients Bifurcation, the proposed StableAdamW has no methodological novelty over AdaFactor as responsed by the authors. I understand the authors' statement about drawing the community's attention to the useful methods studied in this paper, however, I think the contribution is not enough to get accepted.
I think it is critical for a training method to generalize well. Otherwise, as raised by Reviewer rd6a, 'if the training fails, it's unclear if the failure stems from precision issues or model issues'. Thus, I think it is critical to test the generalization ability of the proposed method on a wide range of tasks and models.
I have also read other reviewers' comments, but I still tend to reject this paper based on the above two concerns, which are not well addressed by the authors' response.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for engaging and for your feedback; while we respectfully disagree, we sincerely appreciate your time. We do highlight that, even setting aside the contributions about which you have novelty concerns, there are still two remaining contributions that we believe to be important: a) We demonstrate successful float8 training with quantized gradients, weights, and activations by using zero-init layer-scale (Figure 2). b) We establish a predictive relationship between RMS spikes in the patch embedding layer and loss spikes (Figure 4 and Figures 15-20 of the appendix). Moreover, with respect to the downstream task concern, we note that [1] (e.g., Figure 8) and [2] have observed a strong correlation between zero-shot and downstream task performance.
[1] https://arxiv.org/abs/2103.00020
[2] https://arxiv.org/abs/2212.07143 | null | null | null | null | null | null |
HASSOD: Hierarchical Adaptive Self-Supervised Object Detection | Accept (poster) | Summary: The paper proposes a self-supervised object detection method (in fact, it can also do instance segmentation). It is based on the idea of bottom-up merging using strong feature representations, such as DINO features. The image patch grouping is hierarchical because different stopping thresholds are set. This makes the model interpretable. Overall, the idea is simple and straightforward. The model can be trained with a teacher-student framework. Experiments show better results than the recent work MaskCut.
Strengths: - the bottom-up merging method to solve the self-supervised object detection might be new.
- Figure 1 and 2 are good to understand the method.
- experiment results look good.
Weaknesses: - The writing is bad. Many critical details are omitted, which makes it super hard to understand the method.
- There are some inconsistencies in the paper that need to be clarified.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Some critical details are missing in the paper and they need to be clarified. 1) What is learned, the whole DINO backbone or just a head network? 2) For the Mean-Teacher, what are the losses used to supervise the student networks training?
The grouping process in section 3 is iterative and seems to be too slow. What is the time complexity?
Line 138, how to ensemble?
Line 195: in Section 3, it looks like only the feature representation is used, so why train a Cascade Mask R-CNN here? Obviously, Cascade Mask R-CNN is not self-supervised.
Lines 233 and 234 are inconsistent with Section 4.2. What exactly are Objects365, LVIS, and SAM used for: evaluation or training?
LVIS should have an LVIS-rare benchmark.
Whole, part, subpart should be given rigorous definitions.
For the limitations 2) and 3) in the second paragraph, I don't understand how they are solved after reading the whole paper.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The discussion at the end of the paper is good.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the detailed feedback you provided for our submission. We are encouraged by your acknowledgement that our method is “new”, figures are “good to understand”, and “experiment results look good”. We provide the following clarifications in response to your concerns:
1. Clarification on Learned Network and Mean Teacher
- We learn the weights in both the DINO backbone and the detection head network. To be precise, our approach adopts a two-stage discover-and-learn procedure, and all weight learning happens in the second stage. Please see more details in the general response.
- In Mean Teacher, the student network “receives supervision from two sources simultaneously” (Line 172): One source is the teacher’s predictions, and the other is the pseudo-labels generated in the first stage. The training losses are the *standard object detection losses*, including classification loss, bounding box regression loss, and mask prediction loss. Formally, the training loss for the student is $L_\text{student}=\alpha_\text{teacher}L_\text{det}(\tilde y_\text{student}, \tilde y_\text{teacher})+\alpha_\text{pseudo}L_\text{det}(\tilde y_\text{student}, y_\text{pseudo})$, where $\tilde y_\text{student}$ is the student's prediction for the given image, $\tilde y_\text{teacher}$ is the teacher's prediction, $y_\text{pseudo}$ is the pseudo-label, and $L_\text{det}$ is the standard detection loss mentioned above. $\alpha_\text{teacher}, \alpha_\text{pseudo}$ are loss weights dynamically adjusted during training, where $\alpha_\text{teacher}$ linearly increases while $\alpha_\text{pseudo}$ linearly decreases, as introduced in Lines 183-186.
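[Editor's note] The linear weight schedule described above can be sketched as follows. This is a hypothetical minimal illustration, not the authors' implementation; `total_steps` and the weight bound `alpha_max` are assumptions.

```python
# Hypothetical sketch of the dynamic loss-weight schedule described above:
# alpha_teacher increases linearly while alpha_pseudo decreases linearly,
# so the student gradually shifts from fixed pseudo-labels to the teacher.
def loss_weights(step, total_steps, alpha_max=1.0):
    """Return (alpha_teacher, alpha_pseudo) at a given training step."""
    frac = min(max(step / total_steps, 0.0), 1.0)
    return alpha_max * frac, alpha_max * (1.0 - frac)

def student_loss(l_det_teacher, l_det_pseudo, step, total_steps):
    """L_student = alpha_teacher * L_det(teacher) + alpha_pseudo * L_det(pseudo)."""
    a_t, a_p = loss_weights(step, total_steps)
    return a_t * l_det_teacher + a_p * l_det_pseudo
```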
2. Time Complexity for Grouping Process
Suppose we have $n$ patches in the beginning, then the time complexity of the grouping process is $O(n\log n)$: We have at most $n$ merging steps before stopping. Each merging step requires retrieving the most similar pair of adjacent regions. The collection of adjacent pairs has size at most $O(n)$, and each operation requires $O(\log n)$ time if this collection is organized as a balanced binary tree. Therefore, the grouping process has time complexity $O(n\log n)$. In our practice, the input image has resolution $480\times 480$, so $n=60\times 60=3600$. This time complexity is affordable. For more statistics, please check the general response.
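[Editor's note] The complexity analysis above can be made concrete with a minimal, self-contained sketch of such a greedy merging loop. This is illustrative code, not the authors' implementation; the mean-feature representation, cosine similarity, and the lazy-deletion heap are simplifying assumptions.

```python
import heapq
import math

def cos(u, v):
    """Cosine similarity between two feature vectors."""
    d = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return sum(a * b for a, b in zip(u, v)) / d if d else 0.0

def greedy_merge(features, adjacency, theta):
    """Repeatedly merge the most similar pair of adjacent regions until no
    adjacent pair reaches similarity theta. Stale heap entries are skipped
    lazily, giving roughly O(E log E) cost for E adjacency pairs."""
    feats = {i: (list(f), 1) for i, f in enumerate(features)}  # id -> (mean, size)
    groups = {i: [i] for i in feats}                           # id -> patch indices
    nbrs = {i: set() for i in feats}
    heap = []
    for a, b in adjacency:
        nbrs[a].add(b)
        nbrs[b].add(a)
        heapq.heappush(heap, (-cos(feats[a][0], feats[b][0]), a, b))
    next_id = len(features)
    while heap:
        negsim, a, b = heapq.heappop(heap)
        if a not in feats or b not in feats:
            continue                  # stale: one side was already merged away
        if -negsim < theta:
            break                     # best remaining pair is too dissimilar
        (fa, na), (fb, nb) = feats.pop(a), feats.pop(b)
        merged = [(x * na + y * nb) / (na + nb) for x, y in zip(fa, fb)]
        feats[next_id] = (merged, na + nb)
        groups[next_id] = groups.pop(a) + groups.pop(b)
        nbrs[next_id] = set()
        for c in (nbrs.pop(a) | nbrs.pop(b)) - {a, b}:
            if c in feats:            # keep only still-alive neighbours
                nbrs[c].discard(a)
                nbrs[c].discard(b)
                nbrs[c].add(next_id)
                nbrs[next_id].add(c)
                heapq.heappush(heap, (-cos(merged, feats[c][0]), next_id, c))
        next_id += 1
    return [groups[i] for i in feats]  # surviving regions as patch-index groups
```

Because each region id is immutable once created (merges produce fresh ids), any heap entry whose two ids are still alive carries an up-to-date similarity, so the lazy-deletion check is sound.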
3. Clarification on Ensemble
In Line 138, “ensemble” means to aggregate all the pseudo-labels generated from different merging thresholds $\theta_i^\text{merge}$.
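[Editor's note] In code, that aggregation step might look like the following sketch. The deduplication of exact repeats is our assumption about how outputs from overlapping thresholds are combined, not a detail from the paper.

```python
def ensemble_pseudo_labels(masks_per_threshold):
    """Pool the pseudo-label masks produced at every merging threshold
    theta_i into one candidate set, dropping exact duplicates. Each mask
    is represented here as a set of pixel indices."""
    pooled = set()
    for masks in masks_per_threshold:          # one list of masks per threshold
        pooled.update(frozenset(m) for m in masks)
    return [set(m) for m in pooled]
```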
4. Cascade Mask R-CNN Training
As explained above, our method takes a two-stage procedure. We train a Cascade Mask R-CNN in the second stage, using the pseudo-labels generated in the first stage. Since the pseudo-labels are generated from self-supervised features, our approach is still *fully self-supervised*.
5. Ambiguity in Dataset Usage
Thank you for pointing out the ambiguity. We should parse the sentence as “We evaluate ... (Cascade Mask R-CNN trained by HASSOD) on Objects365, LVIS, and SA-1B”, rather than “We evaluate ... Cascade Mask R-CNN (trained by HASSOD on Objects365, LVIS, and SA-1B)”.
To clarify, the Cascade Mask R-CNN learned by HASSOD is exclusively trained on MS-COCO images. The sentence in question means that after training, we evaluate the performance of this model on the Objects365, LVIS, and SA-1B datasets in a *zero-shot* manner. The model is not further trained on these datasets.
6. LVIS-rare Benchmark
Our main objective is to identify **every object** present in scene images without distinguishing based on classes. We train and evaluate our model as a class-agnostic detector, aligning with the methodology of prior research like CutLER, as mentioned in Lines 210-212 of our paper. Consequently, when evaluating on LVIS, it is necessary for us to consider **objects from all categories**, irrespective of whether their categories are rare, common, or frequent.
7. Rigorous Definitions of Whole, Part, and Subpart
We will revise our paper based on these concepts:
- Whole: The entirety of an object, encompassing all its parts and subparts. In our method, a “whole” object is the largest coherent grouping of pixels that the algorithm identifies as an entity. For instance, when considering an image of a bicycle, the “whole” object would refer to the entire bicycle, including all its parts such as wheels, frame, and handle.
- Part: A smaller component of the “whole” object that has consistent features but does not encompass the entire object. In the bicycle example, the wheel or the handle would be considered a “part” of the bicycle.
- Subpart: An even smaller component of a “part”. It provides finer granularity in object segmentation. In the bicycle example, if we consider the wheel as a “part”, then the spokes and tire of the wheel could be considered “subparts”.
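[Editor's note] The three levels above can be made concrete with a small sketch that assigns each mask a level from coverage relations between masks. This is our own illustration; the 90% coverage threshold and the set-of-pixels mask representation are assumptions, not details from the paper.

```python
def build_hierarchy(masks, cover_thresh=0.9):
    """Assign each mask's parent as the smallest strictly larger mask that
    covers at least cover_thresh of its pixels; tree depth then gives the
    level: roots are "whole" objects, children "parts", grandchildren
    "subparts". Each mask is a set of pixel indices."""
    parent = {}
    for i, m in enumerate(masks):
        candidates = [j for j, other in enumerate(masks)
                      if len(other) > len(m)
                      and len(m & other) >= cover_thresh * len(m)]
        parent[i] = min(candidates, key=lambda j: len(masks[j])) if candidates else None

    def level(i):
        return 0 if parent[i] is None else 1 + level(parent[i])

    names = {0: "whole", 1: "part", 2: "subpart"}
    return {i: names.get(level(i), f"level-{level(i)}") for i in range(len(masks))}
```

On the bicycle example, the full bicycle mask contains the wheel mask, which in turn contains the spoke mask, so the three masks are labeled whole, part, and subpart respectively.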
8. Overcoming Previous Limitations
- Limitation 2 - Narrow object coverage: Many prior methods, including TokenCut and MaskCut from CutLER, can only detect a *pre-defined number of objects* in images due to their algorithm design. However, our hierarchical adaptive clustering (Section 3.1) can recognize a broader range of objects, with the *number adaptively based on image content*. This approach markedly improves object detection, surpassing prior methods. As shown in Table 1, we boost Average Recall (AR) for self-supervised methods on SA-1B from 17.0 to 26.0.
- Limitation 3 - Inefficiency: Prior methods like FreeSOLO and CutLER involve *multiple rounds of self-training*, increasing computational demands. For the first time, we introduced the Mean Teacher paradigm to our framework (Section 3.3), which allows a teacher model to *continually produce pseudo-labels* while a student model *simultaneously learns*. Compared to traditional multi-round self-training, our approach is smooth and efficient. Our method takes only 1/12 of the training iterations (Line 205) to outperform CutLER.
---
Rebuttal Comment 1.1:
Title: Follow-up on LVIS Metrics
Comment: As clarified in our previous response, our main objective is to identify **all objects from all categories** present in scene images without distinguishing based on classes, so we did not evaluate LVIS-rare/common/frequent metrics, which are typically used in a different task of class-aware object detection.
While we do believe that the full LVIS is more suitable as the evaluation benchmark for our class-agnostic detection task, per the reviewer’s request, we have evaluated our model with the LVIS-rare/common/frequent metrics, and compared it against the prior state-of-the-art method CutLER. The results are summarized in the following table. Again, we demonstrate improved performance in every metric, which is consistent with our previous results. In contrast to class-aware object detection, we do not observe a significant gap between the detection performance for rare/common/frequent objects, since our method is developed in a **self-supervised and class-agnostic** manner.
| | AR$_r$ | AR$_c$ | AR$_f$ | AR |
| ------------- | ---- | ---- | ---- | ---- |
| CutLER | 17.9 | 22.2 | 20.2 | 20.2 |
| HASSOD (Ours) | 20.8 | 23.9 | 22.6 | 22.5 |
---
Reply to Comment 1.1.1:
Comment: Hope this message finds you well. This is just a friendly reminder of our recent rebuttal and follow-up responses, in which we have addressed your valuable feedback on our work.
Could you please take a moment to review our responses? If any issues remain unresolved or further clarification is needed, we are more than willing to continue the discussion.
Thank you once again for your time and expertise. We look forward to hearing from you soon.
---
Rebuttal 2:
Comment: Sorry for the delay, and thanks for the authors' effort. Most of my concerns have been addressed. I have increased the score.
---
Rebuttal Comment 2.1:
Comment: We are glad that our responses have addressed your concerns. We appreciate your helpful feedback and reconsideration of the rating. | Summary: The authors propose a self-supervised object detection approach by employing a self-supervised pre-trained model (i.e., DINO) to hierarchically and adaptively group regions into object masks using multiple pre-defined thresholds based on cosine similarity in the feature space. The authors then adapt the Mean Teacher framework to train a student object detector with the initial pseudo-labels from the clustering process as well as the progressively refined pseudo-labels from the teacher model. Extensive experiments demonstrate the superiority of the proposed approach over existing self-supervised object detection and instance segmentation methods.
Strengths: - The experiments are extensive and the results are promising.
- The paper is generally well-written and easy to follow.
Weaknesses: - The authors use a set of pre-defined thresholds (i.e., {0.1, 0.2, 0.4}) to merge region features. However, considering that different datasets may have different distributions, such COCO-tuned hyper-parameters may not be suitable when merging region features for other datasets. Thus, the authors would have to select the optimal thresholds for each dataset when performing clustering, which limits the practical applicability of the proposed method to some extent.
- The authors use some post-processing techniques like CRF to refine the masks. I am curious about the mask quality without such post-processing. Will the performance drop significantly?
- Although the proposed method can segment an image with hierarchy, it seems that the mask quality is somewhat inferior to CutLER according to the qualitative results as shown in the paper.
- The authors choose DINO with the ViT-B/8 model, i.e., each spatial element in the feature map corresponds to an 8x8 patch in the original image. What is the effect of the patch size? It seems that the patch size tends to affect the performance significantly and should also be ablated.
- The authors use Cascade Mask R-CNN to verify the effectiveness of the proposed method. What about other kinds of detectors? It would be better to provide results based on more detectors to verify the generality of the proposed method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See questions mentioned above. I am currently leaning towards borderline reject and hope the authors could address my concerns during the rebuttal.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have well addressed the limitations and the broader impacts in Sec. 5, which looks good to me.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the detailed feedback you provided for our submission. We are encouraged by your acknowledgement that our “experiments are extensive”, “results are promising”, and “paper is well-written and easy to follow”. We provide the following clarifications in response to your concerns:
1. Generalizability of Pre-Defined Thresholds
Please check our general response.
2. Mask Quality Without CRF
The use of the CRF post-processing is necessary for acquiring precise edges in pseudo-labeled masks. When generating initial pseudo-labels, we employ ViT-B/8 to extract features at the patch-level, with each patch being of the size 8x8. Absent CRF post-processing, the masks would inherently be composed of these 8x8 patches. As a consequence, the boundaries of such masks would resemble a jigsaw puzzle and would not be consistent with the real boundaries of objects. Meanwhile, we adopt CRF post-processing following the foundational segmentation work DeepLab [Ref1], as well as prior methods on self-supervised object detection including TokenCut and CutLER. For example, CRF post-processing has been shown necessary in Figure 4 of TokenCut [Ref2].
3. Inferior Mask Quality
It is true that there are some instances where the quality of masks generated by our method may seem inferior when compared directly to CutLER. There are a couple of reasons for this:
- Training duration: As indicated in Section 4.1, our model was trained for a considerably shorter duration compared to CutLER – specifically only **1/12** of CutLER's total iterations. This was mainly due to our computational resource constraints. We believe that with prolonged training, our model has the potential to produce higher quality masks.
- Focus on overall performance: It is also crucial to highlight that our primary objective was to improve the overall detection and segmentation performance by increasing the number of detected objects, especially in challenging scenarios. While we also strive for mask perfection, our main focus is still on improving the overall average recall and precision. As evidenced in Table 1, our approach does demonstrate superior overall performance in instance segmentation, which we consider a significant achievement.
4. Impact of Patch Size in DINO ViT-B/8
We appreciate your query regarding the ViT patch size and have produced the following table as additional ablation study. For fair comparison, we use 480x480 input resolution for ViT-B/8 and 960x960 for ViT-B/16 so that they have the same number of patches. With the same $\theta^\text{merge}$, ViT-B/16 leads to slightly fewer labels per image, but the quality is significantly worse than ViT-B/8, especially for small and medium objects. Therefore, we apply ViT-B/8 in our experiments for its localized visual features and subsequent high-quality pseudo-labels.
| DINO Backbone | $\theta^\text{merge}$ | Labels per Image | AR | AR$_S$ | AR$_M$ | AR$_L$ | AP |
|---|---|---|---|---|---|---|---|
| ViT-B/8 | 0.1 | 2.58 | 4.1 | 0.6 | 5.1 | 15.6 | 1.8 |
| ViT-B/16 | 0.1 | 1.97 | 3.0 | 0.3 | 2.3 | 14.6 | 1.1 |
| ViT-B/8 | 0.2 | 4.20 | 5.3 | 0.9 | 7.0 | 18.4 | 1.8 |
| ViT-B/16 | 0.2 | 3.19 | 3.8 | 0.4 | 3.3 | 17.4 | 1.2 |
| ViT-B/8 | 0.4 | 11.61| 7.8 | 1.7 | 12.1| 22.1 | 1.3 |
| ViT-B/16 | 0.4 | 10.15| 5.7 | 0.9 | 6.6 | 21.7 | 1.3 |
5. Use of Cascade Mask R-CNN
- We chose Cascade Mask R-CNN to ensure **fair comparison with CutLER**. The same detector architecture enables a fair and direct comparison between our proposed method and CutLER. Otherwise, it would be unclear whether the performance gain came from the improved learning paradigm or a stronger detector architecture.
- We do acknowledge the importance of demonstrating the versatility of our method across different detectors. To this end, we are conducting experiments using our proposed method and Mask R-CNN, and we will share the results within the next week.
[Ref1] Chen et al. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. In TPAMI, 2017.
[Ref2] Wang et al. TokenCut: Self-supervised Transformers for Unsupervised Object Discovery using Normalized Cut. In CVPR, 2022.
---
Rebuttal Comment 1.1:
Title: Follow-up on Use of Cascade Mask R-CNN
Comment: As clarified in our previous response, we chose Cascade Mask R-CNN mainly to ensure **a fair comparison with CutLER**. Meanwhile, we do acknowledge the importance of demonstrating the versatility of our method across different detectors. To this end, we have trained a **Mask R-CNN** model using our proposed method. The following table compares LVIS detection performance of Mask R-CNN trained by CutLER and HASSOD. The results demonstrate that our method is indeed generalizable and consistently outperforms CutLER with different detector architectures. We will include the results in the revised version as well.
| | Box AR | Box AR$_S$ | Box AR$_M$ | Box AR$_L$ | Box AP | Mask AR | Mask AR$_S$ | Mask AR$_M$ | Mask AR$_L$ | Mask AP |
| ------------- | ------ | ------- | ------- | ------- | ------ | ------- | -------- | -------- | -------- | ------- |
| CutLER | 20.7 | 10.4 | 33.3 | 52.0 | 4.1 | 18.5 | 9.6 | 29.1 | 44.9 | 3.4 |
| HASSOD (Ours) | 23.8 | 13.5 | 38.3 | 50.9 | 4.3 | 21.5 | 11.8 | 35.1 | 46.6 | 4.1 |
In summary, while our primary focus was on Cascade Mask R-CNN for a consistent comparison with CutLER, our method is certainly not restricted to it and shows superior performance with other detectors as well.
---
Reply to Comment 1.1.1:
Comment: Hope this message finds you well. This is just a friendly reminder of our recent rebuttal and follow-up responses, in which we have addressed your valuable feedback on our work.
Could you please take a moment to review our responses? If any issues remain unresolved or further clarification is needed, we are more than willing to continue the discussion.
Thank you once again for your time and expertise. We look forward to hearing from you soon. | Summary: The goal of this paper is to improve the performance of self-supervised object detection by enhancing the detection pseudolabels. In addition to semantic masks provided as detection labels to an object detector, the authors utilize a rule-based approach to automatically generate a hierarchical structure of objects and provide it as targets to the detector. By training on hierarchical pseudolabels, the detector learns about the compositional structure of objects and therefore performs better. Additionally, the authors use a teacher-student network to train the detector instead of a multi-round self-training approach. The proposed approach outperforms two other self-supervised detection approaches.
Strengths: - The authors tackle an important problem in perception; the hierarchical structure of objects. By providing object labels at multiple levels in a tree structure, the detector can learn patterns in compositional structures of objects (subparts - parts - whole), which helps improve the predictions. For example, a model can learn to condition a whole object prediction on the existence of its parts and subparts.
- This hierarchical structure is not only found in vision, but also other perceptual modalities and NLP. Therefore, understanding how to generate and utilize this structure can be helpful in other tasks and across other modalities.
- This method is completely self-supervised, based on features extracted from DINO trained on ImageNet. The improvement in detection performance comes at no supervision cost, only additional computational cost to generate the pseudolabels.
- The approach shows promising quantitative results, surpassing other self-supervised methods.
Weaknesses: - This approach, like many others, will always be upper bounded by the quality of the features and the robustness of the self-supervised representation learning approach used to generate them. The generation of labels depends on frozen features, so the detector will be sensitive to the quality of those features.
- Approaches that use hierarchical clustering lose representation quality as they naively average high-dimensional feature representations to merge clusters. Therefore, these methods cannot cluster the object representations to generate class labels for training. This is similar to approaches like TW-FINCH [1] that take the same route for event segmentation. That being said, I'm interested in seeing whether high-level representations of objects generated by averaging low-level pixel/patch representations would provide good matching to true class labels. An alternative would be to add a (non-)linear layer on top of the average-based representations to classify the objects.
- While there is no supervision cost to generate the extra hierarchical labels, there is computational cost in clustering and structure discovery. It is important to share the computational cost in order to determine whether this approach is practical and can be scaled up.
- I am not convinced that simple thresholding of the similarity metric provides accurate separation between the hierarchy levels. Using an arbitrary threshold will guarantee a tree structure but not necessarily a part-whole structure. The goal is not to find just any tree structure with parent masks covering children masks; these rules do not guarantee a part-whole structure. Each entity in the tree should be a coherent object (part-whole tree).
- One suggestion is to plot $\theta^{merge}$ against the number of objects in the image and detect regions with lowest slopes to be the thresholds. The intuition is that in these regions the model is less sensitive to changes in $\theta$, therefore there could be good separation between hierarchical levels at these thresholds.
- How do the authors decide on $\theta^{merge}$? Is it using the ablations in Table 2? Table 2 is generated using groundtruth masks. So if the decision on $\theta^{merge}$ is based on evaluating the masks with ground truth masks, this would be a significant flaw in the experimental protocol. It would violate the self-supervision claim. Usually researchers decide based on a small validation set, and even then it is still arguably supervised. This is an important point.
[1] Sarfraz, Saquib, et al. "Temporally-weighted hierarchical clustering for unsupervised action segmentation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
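[Editor's note] The threshold-selection heuristic suggested above (finding low-slope regions of the object count versus $\theta^{merge}$) could be prototyped as a short sketch. This is our illustration of the reviewer's idea; the finite-difference slope estimate and the choice of `k` are assumptions.

```python
def plateau_thresholds(thetas, object_counts, k=3):
    """Rank candidate theta_merge values by how little the number of
    discovered objects changes around them (lowest |slope|), on the
    intuition that flat regions separate hierarchy levels cleanly.
    Returns the k left endpoints of the flattest segments."""
    slopes = [abs((object_counts[i + 1] - object_counts[i])
                  / (thetas[i + 1] - thetas[i]))
              for i in range(len(thetas) - 1)]
    ranked = sorted(range(len(slopes)), key=lambda i: slopes[i])
    return sorted(thetas[i] for i in ranked[:k])
```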
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - When using a single threshold to generate the masks, my understanding is that we should only get one level of objects. In other words, the output masks should not intersect with any other masks. How are the numbers in Table 2 generated when using a single $\theta^{merge}$ threshold? This approach should only be possible when having intersecting masks (i.e., ensemble).
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: - Limitations about not aligning hierarchical levels with human perception are addressed. However, the discussion is missing limitations concerning this model's inability to generate representations useful for classifying the detected boxes/masks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the detailed feedback you provided for our submission. We are encouraged by your acknowledgement that our method tackles “an important problem of hierarchical structure of objects”, is “completely self-supervised”, and shows “promising quantitative results”. We provide the following clarifications in response to your concerns:
1. Upper Bound of Self-supervised Representation
HASSOD is **not upper bounded** by such representations.
- Starting from existing self-supervised representations is a standard practice in the literature of self-supervised object detection. For instance, FreeSOLO utilizes pre-trained DenseCL representations, and the state-of-the-art method CutLER relies on DINO representations. Pre-trained self-supervised representations have been shown as a good starting point for self-supervised object detection. We are also following this paradigm.
- The performance of our method is **not upper bounded** by the constraints of the frozen self-supervised models. Two key insights in HASSOD help us break through the limits of DINO features by self-improvement: 1) We train a detection model to learn common objects and their hierarchical relations for *enhanced generalization* to unseen images. 2) We leverage Mean Teacher for continual self-enhancement, and gradually *minimize the negative impact of noisy initial pseudo-labels*.
- A close examination of Table 2 and Table 3 in our paper demonstrates that HASSOD is not upper bounded by DINO representations. Specifically, the quality of pseudo-labels generated by frozen DINO features can be quantified by the best average recall (AR) and average precision (AP) listed in Table 2 as 8.9 and 1.8, respectively. After training a detection model with HASSOD, these values are boosted to 22.4 and 6.3, respectively, as shown in Table 3.
2. Object Representations for Classification
- Firstly, our primary task is **class-agnostic** self-supervised object detection, established by prior work such as FreeSOLO and CutLER. Distinct from the traditional class-aware object detection tasks, this direction is motivated by numerous scenarios where the primary goal is not to recognize specific object classes but rather to capture objects within an image. As examples in robotic vision, a robot may need to identify and navigate around obstacles without classifying them.
- Meanwhile, we acknowledge the value in examining our extracted features for class-aware object detection. To this end, we tune a set of classification heads newly attached to the pre-trained Cascade Mask R-CNN model. The other modules are initialized from CutLER or HASSOD and frozen during the supervised fine-tuning on MS-COCO. The results of this classification-only tuning are shown in the table below. The model pre-trained by HASSOD achieves $2\times$ AR and AP in class-aware object detection and instance segmentation, suggesting that HASSOD has gained representations more helpful for classification during the self-supervised learning procedure, as compared with CutLER.
|Initialization|AR|AR$_S$|AR$_M$|AR$_L$|AP|
|---|---|---|---|---|---|
|CutLER|4.2|1.1|2.8|6.3|1.5|
|HASSOD (Ours)|**8.4**|**1.6**|**6.8**|**14.2**|**3.1**|
3. Computational Costs
Please check our general response.
4. Simple Thresholding for Accurate Hierarchy Levels
Your concerns over the use of thresholding for deriving hierarchical structures are valid. Indeed, using simple thresholding could potentially lead to deviations from the expected part-whole structure that we typically associate with objects. For instance, there might be instances where we generate a structure that is closer to an incomplete whole object rather than a distinct part.
Despite this concern, thresholding in conjunction with the DINO features leads to surprisingly coherent and semantically meaningful results. Here are our observations:
- Coherence of pseudo-labels: From our extensive experiments and observations, the pseudo-labels generated are **predominantly coherent**, displaying uniformity in color and edge characteristics. We rarely find significant heterogeneity in the pseudo-labels. In Figure 2, we showed a real example from our pseudo-labels. This essentially implies that the hierarchical structures produced by thresholding and DINO are *surprisingly consistent* with the part-whole structures that we, as humans, commonly perceive.
- Robustness of DINO Features: The quality and efficiency of the hierarchical structures are highly attributed to the DINO features. Prior work [Ref1, 2, 3] has demonstrated DINO's ability in identifying **whole objects**. Impressively, DINO features are robust with respect to various types of thresholds. Our contribution is to extend this understanding further by not only segmenting whole objects but also extracting **meaningful and coherent parts of objects**.
5. Deciding and Plotting Thresholds $\theta^\text{merge}$
Please check our general response.
6. Table 2 with Single $\theta^\text{merge}$ Threshold
In Table 2, for the rows with one single $\theta^\text{merge}$ threshold, as the reviewer correctly pointed out, we would produce a collection of pseudo-labels, in which each mask does not intersect with others. However, Table 2 aims to evaluate the quality of these generated *initial pseudo-labels* as individual entities. The evaluation does not involve the hierarchical relationships between the labels. Instead, it provides an assessment of how well these pseudo-labels align with true object masks in the image based on the selected threshold. The hierarchical structure of masks is learned and assessed in the subsequent stage of HASSOD.
[Ref1] Siméoni et al. Unsupervised Object Localization: Observing the Background to Discover Objects. In CVPR, 2023.
[Ref2] Wang et al. Cut and Learn for Unsupervised Object Detection and Instance Segmentation. In CVPR, 2023.
[Ref3] Hamilton et al. Unsupervised Semantic Segmentation by Distilling Feature Correspondences. In ICLR, 2022.
---
Rebuttal Comment 1.1:
Title: Comment on the Rebuttal
Comment: Thank you for the rebuttal. I have read your answers to my questions and the other reviews.
I am still not convinced that the performance is not upper-bounded by the **quality** of the frozen features you use for hierarchy discovery. My understanding is that the better the frozen features, the better the separation between hierarchical levels (as you mentioned in answer 4 about DINO features coherence). A better hierarchy improves object detection performance, because this is the premise of the paper. Therefore the improvement in object detection performance is always bounded by the quality of frozen features.
As I mentioned in the review, I understand this procedure is similar to other methods that use pretrained frozen features and will lead to object detection not improving beyond a certain level, because the features are frozen. Showing that object detection performance trained on the hierarchy results in higher performance than objects detected from frozen features seems irrelevant to the concern I raised. If you cannot enhance the frozen features quality, you cannot improve the hierarchy quality, therefore cannot improve the object detection performance. Is my understanding correct, or am I missing something?
Thank you!
---
Reply to Comment 1.1.1:
Title: Follow-up Response
Comment: Thank you for your insightful comment. We would like to further clarify our approach in light of your remaining concern about the pretrained features in the following aspects:
1. Focus on Object Detection, Not Representation Learning
We would like to emphasize that the core of our research is on self-supervised *object detection*, rather than on improving self-supervised *representations* themselves. As the reviewer mentioned, consistent with previous state-of-the-art work including CutLER, our approach leverages existing pre-trained features to perform the object detection task. We have demonstrated that our approach HASSOD, while using the same pre-trained features by DINO, leads to enhanced results compared to prior works (see Table 1), highlighting the effectiveness of our specific contributions to self-supervised object detection.
2. Two Avenues for Performance Improvement
Enhancing the frozen feature quality is *not the only way* to improve the object detection performance. In fact, from the perspective of pre-trained features, we believe that there are two ways to improve self-supervised object detection performance:
- Approach 1: As you have correctly suggested, and we fully agree, one possible way for performance enhancement is through *better pre-trained features*. Currently, DINO has proven to be the *most effective pre-trained features for object discovery* and has been utilized by state-of-the-art methods such as CutLER. Future advances in self-supervised representation learning could yield even better pre-trained features, and using such improved features would further improve detection performance. However, developing more advanced self-supervised representations is beyond the scope of this paper.
- Approach 2: Contrary to relying solely on frozen features, our work showcases the ability to **refine these features through the object detection training process**. We find this a critical contribution to highlight. In both HASSOD and existing self-supervised object detection work, the initial pseudo-labels generated by the frozen features could be noisy and coarse. For example, when the boundary between a foreground object and its background is vague, the frozen features are less distinguishable between the foreground and background, thus leading to inaccurate mask edges or even missed objects. However, after observing more similar instances in the whole dataset, the detector can gradually learn to fix errors in pseudo-labels, and detect such objects accurately. During this procedure, the pre-trained features are also adapted in an end-to-end manner for detecting more challenging objects. Therefore, we can achieve higher performance by training a detector than directly deriving objects from frozen features – this result was pointed out in our previous response. We hope our explanation here also clarifies the reviewer’s question regarding how this result is relevant to addressing the reviewer’s concern.
3. Refinement Beyond Frozen Features
It is important to clarify that while we start our process with pre-trained DINO features, these features are **not permanently frozen** in our approach. During the training of our object detection model, we do fine-tune the DINO backbone. This fine-tuning, coupled with self-correction in detector training and the Mean Teacher strategy, enables us to adapt and potentially improve the DINO representations specifically for the object detection task.
We provide a table below to compare the quality of objects discovered directly using the *frozen DINO ResNet-50 backbone* versus those discovered using the *fine-tuned backbone in our trained detector*. We use our hierarchical adaptive clustering strategy and standard post-processing to discover objects. Please note that this table is a direct comparison on the feature quality between the two backbones, rather than the final detection results. This comparison explicitly illustrates our ability to *substantially enhance* the pre-trained representations for the task of object discovery.
| Backbone | AR | AR$_S$ | AR$_M$ | AR$_L$ |
| ------------------------------------------ | ---- | ---- | ---- | ---- |
| Frozen DINO ResNet-50 | 1.8 | 2.1 | 1.3 | 2.0 |
| Fine-tuned DINO ResNet-50 in HASSOD (Ours) | 3.4 | 3.9 | 3.0 | 2.2 |
In summary, our work not only leverages pre-trained representations, but also refines and adapts them to the specific task of self-supervised object detection. | Summary: This paper proposes Hierarchical Adaptive Self-Supervised Object Detection (HASSOD), an approach that learns to detect objects and understand their compositions without human supervision.
HASSOD employs a hierarchical adaptive clustering strategy to group regions into object masks based on self-supervised visual representations, adaptively determining the number of objects per image.
HASSOD identifies the hierarchical levels of objects in terms of compositionality, by analyzing coverage
relations between masks and constructing tree structures.
The proposed method adapts the Mean Teacher framework from semi-supervised learning, which leads to a smoother and more efficient training process.
The Mask AR is improved from 20.2 to 22.5 on LVIS, and from 17.0 to 26.0 on SA-1B.
Strengths: Good results of the process, exhibited in Fig. 3.
Many quantitative comparisons in Tables 1-3.
Nice to read the overall paper.
Good to see the reference of the vital work of [7], even after more than a decade.
Appendix Doc highlights the importance of a better metric than AR.
Weaknesses:
Lots of experiments and results, but no analytics. Not even a single equation, cost function, or hyper-parameter set is mentioned.
Adaptation of the mean-teacher model still makes it a self-supervised framework.
The overall architecture is explained with example images in a diagram in Fig. 2.
A flowchart would still be better.
Which part requires CNN-based learning, and which part requires shallow learning (if any) or heuristics, is not so clear (except briefly in a short second paragraph of Sec. 4.1).
The platform/environment/resources used for implementation are specified only in the Appendix.
The computational time required for each sub-stage of the process is also not mentioned, including the inference time. This is very important for practitioners.
A few failure cases should have been highlighted with causes/reasons; the Appendix is not thorough on this either.
The color labels used for segmentation masks vs. bounding boxes often cause confusion - the colors overlap too much for anyone to clearly understand.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: If this proposed work is in line with self-supervision,
what are the equivalent/compatible stages of -
pretext learning using base data with no labels;
fine-tuning with support set;
target set for inference on downstream task?
A table specifying the dataset distributions (although names are given) and the stages exploiting them for self-supervision should have been given.
Any scope of few-shot learning or meta-learning framework in your proposed approach?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Lack of analytics;
no ablation studies with cost functions;
lack of mapping to the standard self-supervision paradigm is a concern.
Object Detection in complex cases of overlap of objects, background clutter not provided.
I think it's high time that researchers also look at identification/detection in camouflage (a personal opinion).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the detailed feedback you provided for our submission. We are encouraged by your acknowledgement of our “good results”, “quantitative comparisons”, and overall writing. We provide the following clarifications in response to your concerns:
1. Lack of Analytics and Ablation Study Regarding Cost Functions and Hyper-parameters
- Our central contribution is a **new learning paradigm**, rather than innovating on cost functions. We adopted a *minimalistic* design, using standard cost functions rooted in established literature. Most loss functions and hyper-parameters are inherited from previous works.
- To demonstrate the standard practices in our design, we can look into the two-stage learning of HASSOD (see more details in the general response):
- Stage 1: We apply the hierarchical adaptive clustering strategy, leveraging the DINO representations to derive initial pseudo-labels. The similarity measure in this process, as described in Line 118, involves computing the *pairwise cosine similarity* between adjacent region features: $\frac{x^\top y}{\lVert x\rVert_2 \lVert y\rVert_2}$. This choice of metric is well-grounded in the self-supervised learning literature such as SimCLR and MoCo.
- Stage 2: This stage focuses on training our object detection and instance segmentation model Cascade Mask R-CNN. We optimize *conventional detection and segmentation losses*, including 1) the foreground/background classification loss, 2) bounding-box regression loss, and 3) mask prediction loss, following Mask R-CNN and Cascade R-CNN.
- Presentation choices: In our initial submission, we consciously chose to minimize the use of equations, especially when the textual description was sufficient. Our goal was to maintain the paper's readability and prevent readers from being overwhelmed by equations. However, in light of your feedback, we will revise our paper with a more explicit introduction to the computational procedures, backed by relevant equations and analytics.
- Ablation study clarification: Since most of our chosen loss functions and hyper-parameters are rooted in standard practice, we found it unnecessary to ablate these specific choices. Meanwhile, we had ablation studies for **our novel designs**: hierarchical level prediction (Section 3.2) and mean teacher training with an adaptive target (Section 3.3). The details are in Table 3.
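For concreteness, the Stage 1 similarity measure can be sketched in a few lines (the 4-d vectors below are made up for illustration; actual region features would come from the frozen DINO backbone):

```python
import numpy as np

def cosine_similarity(x, y):
    """Cosine similarity between two region feature vectors."""
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

# Made-up 4-d features for two adjacent regions (illustrative only;
# real features come from the frozen DINO backbone).
a = np.array([1.0, 0.0, 1.0, 0.0])
b = np.array([1.0, 0.0, 0.0, 0.0])
print(cosine_similarity(a, b))  # ~0.707
```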
2. Self-Supervised Nature with Mean Teacher
We designed our method to be fully self-supervised relying on no labeled data, which we perceive as *a primary advantage*. We would be grateful for additional insights if there are specific concerns about our self-supervised strategy.
3. Figure 2 and Clarification on Components Involving CNN Learning
We have revised this figure and included it as Figure R1 in the rebuttal PDF. As mentioned in the general response, HASSOD is a two-stage *discover-and-learn* approach. CNN-based learning happens in the second stage.
4. Computational Details
Please check our general response.
5. Lack of Failure Case Analysis
We appreciate your suggestion and have included some failure case analysis in Figure R4 in the rebuttal PDF.
6. Overlap of Colored Masks and Bounding Boxes During Visualization
We followed standard visualization practices for instance segmentation as Mask R-CNN. The perceived clutter in our figures stemmed from depicting challenging cases with multiple objects in a scene. Our hierarchical predictions also resulted in overlapped masks. Based on your suggestion, we will omit colored bounding boxes and only show segmentation masks for better visual clarity. As an example, please check the failure case visualization mentioned above.
7. Alignment with Standard Stages of Self-Supervision Paradigm
- First, it is essential to note that our work does not delve into self-supervised **representation** learning, which is merely a subset of the broader self-supervised learning domain. Our primary focus is on self-supervised **object detection**, an inherently significant task in its own right. This task has a wide range of potential applications such as robotic vision. When contextualizing our work within the existing literature, our method aligns with the *discover-and-learn* paradigm by prior work such as FreeSOLO and CutLER.
- The goal of our self-supervised learning task sets it apart from representation learning methods like SimCLR or MoCo. However, for clarity, the training of our model using the pseudo-labels produced by our clustering can be considered as the “pretext” task, though the task goals are not generated on-the-fly. Moreover, our work does not involve any “fine-tuning” or “linear probing” on ground-truth labels, as our model intrinsically produces object-level bounding boxes and masks. Regarding “downstream evaluation”, we assess our model's performance on unseen datasets, like SA-1B, in a zero-shot setting.
8. Dataset Specifics for Self-Supervision
Throughout our self-supervised learning process, we solely used the `train` and `unlabeled` splits of MS-COCO, which together contain 0.24 million images. This was detailed in Section 4.1.
9. Incorporation of Few-Shot Learning or Meta-Learning
While few-shot and meta-learning are valuable, our current research centers on fully self-supervised object detection. Integrating these techniques would be an interesting future exploration.
10. Object Detection in Complex Scenarios
Indeed, our model has been tested in challenging cases with object overlap and cluttered backgrounds. Please refer to our supplementary material, particularly Figure 6 (first row) and Figure 7 (last two rows). These illustrations demonstrate the advantage of our model over CutLER in complex scenarios.
---
Rebuttal Comment 1.1:
Title: Follow-up on Detection in Camouflage
Comment: Thank you for highlighting Camouflage Object Detection (COD) [Ref 1]. COD presents a unique set of challenges, given the intrinsic similarities between the target and its environment. Currently, the best approaches to COD [Ref 1, 2] are based on high-quality human annotations and supervised learning. Given this context, utilizing self-supervised learning may not be immediately suitable. Exploring the overlap of self-supervised learning and COD is indeed an interesting future avenue.
[Ref1] Deng-Ping Fan, Ge-Peng Ji, Guolei Sun, Ming-Ming Cheng, Jianbing Shen, Ling Shao. Camouflaged Object Detection. In CVPR, 2020.
[Ref2] Deng-Ping Fan, Ge-Peng Ji, Ming-Ming Cheng, Ling Shao. Concealed Object Detection. In TPAMI, 2022.
---
Reply to Comment 1.1.1:
Comment: Hope this message finds you well. This is just a friendly reminder of our recent rebuttal and follow-up responses, in which we have addressed your valuable feedback on our work.
Could you please take a moment to review our responses? If any issues remain unresolved or further clarification is needed, we are more than willing to continue the discussion.
Thank you once again for your time and expertise. We look forward to hearing from you soon. | Rebuttal 1:
Rebuttal: In this response, we provide clarification on the common questions and concerns raised by the reviewers.
1. Clarification on Learning Procedure (Reviewers s64Y, wiMX)
Overall, HASSOD adopts a two-stage *discover-and-learn* approach, as illustrated in Figure R2 in the rebuttal PDF. This two-stage approach is not only intuitive but also a standard practice in the self-supervised object detection domain, as evidenced by literature such as FreeSOLO and CutLER.
- Stage 1 - Initial pseudo-label discovery: We employ our hierarchical adaptive clustering strategy (Section 3.1) to derive initial pseudo-labels. This process is based on a *frozen* DINO model, and it does *not* require learning any parameters.
- Stage 2 - Object detector learning: We *train a Cascade Mask R-CNN model* using the pseudo-labels from the first stage, enhanced by our hierarchical level prediction (Section 3.2) and Mean Teacher self-training (Section 3.3). This model has learned to detect and segment objects, offering *enhanced generalization* to images it has not seen during training, since it can learn from common objects and their relations in different training images.
2. Computational Costs (Reviewers s64Y, p4z2)
We apologize for any oversight in clearly presenting the computational platform, resources, and processing times within the main body of the paper. While we did mention some details regarding the *training data and iterations* in Section 4.1 and provided an extended discussion on the *computational platform* in Section F of the supplementary material, we understand the necessity for more comprehensive information. To address this, we have prepared the following table regarding the computation costs in pseudo-label generation.
|Step|Time Cost (sec/image)|Workers|Parallelized Cost (sec/image)|
|---|---|---|---|
|Merge and Post-process|11.7|8|1.46|
|Ensemble and Split|2.1|16|0.13|
|Total|13.8|-|1.59|
We use both the `train` and `unlabeled` splits of MS-COCO, totaling about 0.24 million images. On computation nodes equipped with $4\times$ NVIDIA A100 GPUs, we can parallelize the processing of images, reducing the time to 1.59 sec/image. With 4 such nodes, we can complete the pseudo-label generation for 0.24 million images in $1.59\times0.24\times10^6/86400/4\approx1$ day. Our method can be readily extended to an even larger scale of data with more parallel compute nodes.
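The $\approx 1$ day estimate above is just arithmetic; as a sanity check:

```python
# Sanity check of the pseudo-label generation time estimate above.
sec_per_image = 1.59      # parallelized cost per image (from the table)
num_images = 0.24e6       # MS-COCO train + unlabeled splits
num_nodes = 4             # computation nodes, each with 4x A100 GPUs
seconds_per_day = 86400

days = sec_per_image * num_images / seconds_per_day / num_nodes
print(f"{days:.2f} days")  # ~1.10 days, i.e. roughly 1 day
```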
For model training, the Cascade Mask R-CNN training procedure takes about 20 hours on our node with $4\times$ NVIDIA A100 GPUs. During inference, each image takes 0.15 second on average.
We will include the computational information in our revision.
3. Deciding Thresholds $\theta^\text{merge}$ (Reviewers p4z2, cMyQ)
Instead of relying on the validation performance shown in Table 2, we chose the merging thresholds $\theta_i^\text{merge}$ purely based on computational considerations and empirical observations, ensuring that our approach is fully self-supervised.
- Guidance by number of pseudo-masks: Our choice for $\theta^\text{merge}$ was primarily guided by the **number of pseudo-label masks produced per image**. In the rebuttal PDF, Figure R3 shows the relationship between #masks/image and different thresholds. When $\theta^\text{merge} \ge 0.5$, the number of masks per image escalates rapidly. This steep increase incurs *significant computational costs*, both during the initial generation of pseudo-labels and the subsequent data loading and pre-processing procedures during model training. To strike a balance between computational efficiency and the desired mask granularity, thresholds of $\theta_i^\text{merge} \in \{0.1, 0.2, 0.4\}$ were chosen. This choice is consistent with the suggestion by Reviewer p4z2.
- Threshold generalization across datasets: Another noteworthy observation is the generalizability of these thresholds across various datasets. In the following table, we present the **number of generated pseudo-labels per image** on three datasets. With the merging threshold $\theta^\text{merge}$ fixed, the number of generated labels is relatively stable, regardless of the source image dataset. Therefore, our pre-set thresholds are generalizable and require no further tuning when transferred to other datasets. Meanwhile, our detection model was trained on MS-COCO images with pseudo-labels generated using our $\theta_i^\text{merge}$, and could generalize well to other datasets in a zero-shot manner, as shown in Table 1. This fact shows that the $\theta_i^\text{merge}$ are effective regardless of evaluation datasets.
|$\theta^\text{merge}$|MS-COCO|Objects365|SA-1B|
|---|---|---|---|
|0.1|2.58|3.49|2.91|
|0.2|4.20|5.78|4.88|
|0.4|11.61|12.15|12.70|
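To illustrate why a higher $\theta^\text{merge}$ yields more masks per image, here is a hypothetical toy sketch (a union-find simulation with made-up chain adjacency and random features, not HASSOD's actual clustering): merging only region pairs whose cosine similarity exceeds the threshold means a higher threshold blocks more merges, so the partition can only get finer.

```python
import numpy as np

def count_masks(features, adjacency, theta):
    """Toy threshold-gated merging: union adjacent regions whose
    feature cosine similarity exceeds theta; return #resulting masks."""
    parent = list(range(len(features)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, j in adjacency:
        cos = features[i] @ features[j] / (
            np.linalg.norm(features[i]) * np.linalg.norm(features[j]))
        if cos > theta:                    # merge only if similar enough
            parent[find(i)] = find(j)
    return len({find(i) for i in range(len(features))})

rng = np.random.default_rng(1)
feats = rng.normal(size=(50, 8))           # 50 toy regions, 8-d features
adj = [(i, i + 1) for i in range(49)]      # toy chain adjacency
print(count_masks(feats, adj, 0.1), count_masks(feats, adj, 0.4))
# The higher threshold yields at least as many masks.
```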
Pdf: /pdf/a00aa85c67372ee2f3720eb734f095f41f9c1b1d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Sparsity-Preserving Differentially Private Training of Large Embedding Models | Accept (poster) | Summary: This paper targets the learning of embedding models with DP-SGD.
The main idea is to utilize the sparsity of gradients to the embeddings.
If a mask indicating the sparsity of gradients is given, one can simply adapt DP-SGD by adding noise only to the masked dimensions.
They propose two variants, DP-FEST and DP-AdaFEST, where in DP-FEST the mask is computed beforehand and in DP-AdaFEST the mask is estimated with differential privacy by thresholding a noisy version of the original gradients.
They evaluate the proposed methods on recommendation and language understanding tasks.
Strengths: The problem investigated, i.e. learning embedding models through DP-SGD, is meaningful and the proposed method is very intuitive.
Weaknesses: 1. There seems to be a mismatch of scopes between the title and the rest of the paper. The sparsity of gradients with DP-SGD is only possible with the proposed scheme when there is naturally sparsity introduced by design, such as when each sample involves only a few rows in the entire embedding matrix (i.e. in the vocabulary) and the vocabulary contains many, many embeddings. It is important to have a title accurately reflecting the scope of the work. I would suggest, for example, Sparsity-Preserving Differentially Private Training of Embedding Vocabularies.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See Weakness for some other comments.
1. It is a good practice to include the performances of non-private baselines for empirical experiments. While in many cases there can still be sizable gaps between DP methods and non-private baselines, it can help readers to understand intuitively how large the gaps remain to be.
2. While quite a few work improving DP-SGD are already mentioned in introduction, there are indeed some improvements of DP-SGD that are overlooked ([1-3]). While it is fine to focus on vanilla DP-SGD for experiments since it is still popular, I suggest to discuss these work as part of the background.
**reference:**
[1] Wang, Wenxiao, Tianhao Wang, Lun Wang, Nanqing Luo, Pan Zhou, Dawn Song, and Ruoxi Jia. "DPlis: Boosting Utility of Differentially Private Deep Learning via Randomized Smoothing." Proceedings on Privacy Enhancing Technologies 4 (2021): 163-183.
[2] Shamsabadi, Ali Shahin, and Nicolas Papernot. "Losing less: A loss for differentially private deep learning." (2021).
[3] Park, Jinseong, Hoki Kim, Yujin Choi, and Jaewook Lee. "Differentially Private Sharpness-Aware Training." arXiv preprint arXiv:2306.05651 (2023).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors addressed the limitations fairly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We genuinely appreciate the valuable feedback provided by the reviewer and have addressed them in a point-by-point manner below. We are more than willing to engage in further discussions with the reviewers should any follow-up questions arise.
### **Q1. A better title**
> There seems to be a mismatch of scopes between the title and the rest of the paper. The sparsity of gradients with DP-SGD is only possible with the proposed scheme when there is naturally sparsity introduced by design, such as when each sample involves only a few rows in the entire embedding matrix (i.e. in the vocabulary) and the vocabulary contains many, many embeddings. It is important to have a title accurately reflecting the scope of the work. I would suggest, for example, Sparsity-Preserving Differentially Private Training of Embedding Vocabularies.
**A**: We appreciate the reviewer’s comment. In the revision, we will change the title to "Sparsity-Preserving Differentially Private Training of Large Embedding Models", to more accurately reflect the scope of our work.
### **Q2. Performance of the non-private baseline**
> It is a good practice to include the performances of non-private baselines for empirical experiments. While in many cases there can still be sizable gaps between DP methods and non-private baselines, it can help readers to understand intuitively how large the gaps remain to be.
**A**: Tables 2 & 3 in the Appendix of our submission contain the non-private baseline results for Criteo and NLU tasks. If the reviewer thinks it’d be helpful, we could consider adding in the revision the following table which contains the non-private baseline numbers for all datasets.
| Task | Non-private baseline |
|------------------------------------------|----------------------|
| **Recommendation task** (metric: AUC) | |
| Criteo-Kaggle | 0.8063 |
| Criteo-time-series (streaming period=1) | 0.7846 |
| Criteo-time-series (streaming period=2) | 0.7848 |
| Criteo-time-series (streaming period=4) | 0.7847 |
| Criteo-time-series (streaming period=8) | 0.7820 |
| Criteo-time-series (streaming period=16) | 0.7848 |
| Criteo-time-series (streaming period=18) | 0.7848 |
| **NLU task** (metric: accuracy) | |
| SST-2 | 94.31% |
| QNLI | 92.38% |
| QQP | 91.67% |
### **Q3. New references for improvements of vanilla DP-SGD**
**A**: We thank the reviewer for sharing the new references that improve vanilla DP-SGD; we will add them in the revision.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. Since I have no major concern regarding this submission, I think my rating still reflects my assessment of this work. I will keep the score as it is for now.
Have a good day!
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's response and it's great to hear that the reviewer has no major concern about our work. We are also more than willing to continue the discussion if the reviewer has any minor points they'd like to bring up.
Have a good day too!
Strengths: 1. Differential privacy is an important topic in the study of privacy and security. Making DP more practically useful is a main challenge in the current research.
2. This work clearly clarify the background and its methodology, the paper is easy to follow.
Weaknesses: 1. The proposed method is similar to the sparse technologies in DP. There is a long line of works studying sparse technology and DP selection, many of them have also studied large embedding layer in the language models. This work is not properly placed in the contemporary literature.
2. Applying LoRA to the embedding layer can also reduce the gradient size and obtain similar advantages, it would be interesting to compare the proposed methods to such low-rank methods.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: 1. What does "best" mean in the term "best gradient size reduction"?
2. According to my knowledge, if the sparsity pattern is changing consistently, it is hard to take advantage of this property and accelerate the computation. Please correct me if this is wrong for specific hardware, e.g. TPU.
3. Please state the accuracy of the baseline in the main text. This is crucial for the evaluation of the conducted experiments.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 1 poor
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We genuinely appreciate the valuable feedback provided by the reviewer and have addressed them in a point-by-point manner below. We are more than willing to engage in further discussions with the reviewers should any follow-up questions arise.
### **Q1. Missing placement of contribution?**
> The proposed method is similar to the sparse technologies in DP. There is a long line of works studying sparse technology and DP selection, many of them have also studied large embedding layer in the language models. This work is not properly placed in the contemporary literature.
**A**: Yes, there has been extensive work on DP selection. In contrast, to the best of our knowledge, there is no prior work that studies the *sparse DP training of large embedding layers in recommender systems and language models*. If the reviewer has pointers to any such references, please let us know and we are happy to incorporate them.
### **Q2. Applicability of LoRA to embedding layers**
> Applying LoRA to the embedding layer can also reduce the gradient size and obtain similar advantages, it would be interesting to compare the proposed methods to such low-rank methods.
**A**: We appreciate the reviewer’s suggestion. However, applying LoRA to the embedding layers is not a common practice due to the following reasons:
1. LoRA was introduced as a method to efficiently adapt matrices of dimensions $n \times d$ in language models by utilizing a rank-$r$ approximation (the initial use case considers the attention layers where $n=d$). The rank-$r$ approximation helps in reducing the memory requirements by a factor of $\frac{n d}{(n+d)\,r} < \frac{\min(n,d)}{r}$. However, in the case of embedding layers, where $n$ represents the vocabulary size and $d$ denotes the embedding dimensionality, a notable disparity exists: the vocabulary size $n$ is typically very large (>1M in the evaluated recommendation task and ~50,000 in the evaluated NLU task), while the embedding dimensionality $d$ is in the hundreds. Consequently, **the potential for improvements using LoRA in this context is limited**.
2. For private training of the embedding layer, using DP-AdaFEST we could still benefit from the efficient embedding lookup (i.e., row fetching operations) via customized APIs. However, LoRA would **not be able to leverage these APIs** as it requires relatively expensive matrix multiplication.
3. While **LoRA needs to adapt a pre-trained model**, our algorithm works in both pre-training and fine-tuning, as demonstrated in the recommender system and language model experiments, respectively.
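To make point 1 concrete, here is a small sketch of the LoRA parameter-count reduction factor $\frac{nd}{(n+d)r}$ (the dimensions below are illustrative, not the exact model sizes):

```python
# Parameter-count reduction factor of a rank-r LoRA adapter on an
# n x d matrix: the full matrix has n*d parameters, the adapter has
# (n + d)*r, so the factor is n*d / ((n + d)*r).
def lora_reduction(n, d, r):
    return (n * d) / ((n + d) * r)

# Square attention matrix (LoRA's original use case): large savings.
print(lora_reduction(768, 768, 8))        # 48x

# Embedding layer with a 1M-row vocabulary: the factor is capped by
# roughly d / r, despite n being huge.
print(lora_reduction(1_000_000, 256, 8))  # just under 32x
```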
To more effectively demonstrate the arguments above, the table below compares the best embedding gradient size reductions achieved by AdaFEST and LoRA against DP-SGD for SST-2 with $\epsilon=1.0$. We vary LoRA's rank $r$ from {4, 8, 16, 32, 64, 128}. AdaFEST consistently outperforms LoRA in gradient size reduction at similar utility levels. Additionally, AdaFEST allows effective utilization of reduced gradients through customized APIs, while the implications of LoRA's reduction remain unclear.
| **Utility loss compared to DP-SGD** | **Best gradient size reduction, AdaFEST** | **Best gradient size reduction, LoRA** |
|---|---|---|
| 0.001 | 17.41x | 5.91x |
| 0.005 | 62.14x | 23.64x |
| 0.01 | 62.14x | 47.28x |
### **Q3. Speedup for changing sparsity patterns?**
> According to my knowledge, if the sparsity pattern is changing consistently, it is hard to take advantage of this property and accelerate the computation. Please correct me if this is wrong for specific hardware, e.g. TPU.
**A**: Leveraging sparsity efficiently often requires specialized algorithms and hardware capable of handling sparse data. Fortunately, while the sparsity exploited by our algorithm is dynamic, it follows a consistent structure that only a small number of **rows** of the gradient of the embedding layer are non-zero. This (dynamic) structure can be efficiently exploited by modern accelerators that provide dedicated implementation for embedding lookups.
For example, Google Cloud TPUs can efficiently exploit such dynamic sparsity patterns. Appendix C.2 demonstrates significant wallclock time improvement using TPUEmbedding (please refer to footnote 6 in our Appendix C.2.1 for the link), the sparse embedding API provided by Google Cloud TPUs. This showcases TPUs' proficiency in handling sparse operations with compressed representations and efficient memory access patterns, proving their ability to make use of the dynamic sparsity patterns here.
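As a minimal illustration of this row-sparse structure (a toy simulation, not the paper's actual training code; the vocabulary size, batch size, and tokens per example are made up):

```python
import numpy as np

# Each example only touches the vocabulary rows it looks up, so the
# summed embedding gradient has few non-zero rows.
vocab, dim = 10_000, 16
rng = np.random.default_rng(0)

grad = np.zeros((vocab, dim))
for _ in range(32):                                   # a batch of 32 examples
    rows = rng.choice(vocab, size=5, replace=False)   # ~5 tokens per example
    grad[rows] += rng.normal(size=(5, dim))           # per-example contribution

nonzero_rows = np.flatnonzero(np.abs(grad).sum(axis=1))
print(len(nonzero_rows), "of", vocab, "rows are non-zero")
# Adding noise only to (privately selected) rows keeps this structure,
# whereas vanilla DP-SGD densifies all 10,000 rows.
```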
### **Q4. What does "best" mean in the term "best gradient size reduction"?**
**A**: Our experiments explore various combinations of hyperparameters (Section 4.4 has the details) and we reported the maximum gradient size reduction achieved at different utility requirements. We will clarify this in the revision.
### **Q5. Please state the accuracy of the baseline in the main text**
**A**: We appreciate the great suggestion. We will report the accuracy in the main text in the revision.
---
Rebuttal Comment 1.1:
Title: Response by Reviewer
Comment: Thank you to the authors for providing detailed responses.
My concerns regarding Q2 and Q4 have been addressed. I'd like to further clarify Q1, which is my major concern, and I'm not fully convinced by the response to Q3. Additionally, since the authors haven't updated the baselines' accuracies, I'm unable to evaluate some experimental results as stated in Q5.
## Q1:
The authors have positioned this paper as the pioneering work in studying DP networks with large embedding layers.
*Line 48 - 49*
> ...this study is the first to address the technical challenges of applying DP-SGD to large embedding models.
While DP on large language models has been widely researched, with many previous works cited in this paper (e.g., [YNB+ 22] and [LTLH21]), it's worth noting that LTLH21 argued that DP full fine-tuning might outperform non-private fine-tuning. Thus, the above claim appears misleading or exaggerated. Moreover, this work's experimental settings, such as utilizing the RoBERTa network and certain language datasets, have been previously explored. Hence, this paper is not treading new ground.
I perceive the primary focus of this work as preserving sparsity. From an abstract standpoint, the gradient of the embedding layer could be viewed as a vector with mostly zero elements. Numerous studies have tackled sparse vectors in the context of DP. I see the authors also agree with that, so I won't take the time to look up and copy the links. However, I find a gap in the introduction of relevant methods, and it is unclear why this specific form of DP-AdaFEST is chosen.
The concept of sparing the addition of noise to large embedding layers dates back at least two years [1].
In summary, preserving the sparsity of the embedding layer has been proposed before. The technology for preserving sparsity, e.g. DP selection, has been extensively studied. There are many previous works studying applying DP to large embedding models. As a result, I believe this work lacks novelty and a more thorough introduction of related works is needed.
## Q3
Upon examining the wall-clock time reduction experiments (Appendix C.2.1), the results were astonishing: the proposed method appears to be approximately 200 times faster. My understanding of the computation-acceleration references cited here suggests that sparse gradients can be computed faster than dense gradients, thanks to sparse matrix multiplication. However, in the context of this paper, sparse gradients are computed initially, with the only distinction being whether sparsity is preserved during noisification. This preservation would only affect later parameter updates, amounting to mere matrix addition. I fail to see why this can lead to such a leap in computational speed.
I wish to emphasize that sparsity is often exploited for computation acceleration due to sparse matrix multiplication. However, the proposed method doesn't involve this operation. Could the authors elaborate on how preserving sparsity during noisification substantially boosts computational efficiency?
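For concreteness, the two noisification variants under discussion could be sketched as follows (an illustrative sketch with assumed shapes and names, not the paper's implementation): dense DP-SGD noisification touches every one of the $|V|$ embedding rows, $O(|V| \cdot d)$ work per step, while sparsity-preserving noisification touches only the $k$ rows active in the batch, $O(k \cdot d)$ work.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, sigma = 50_000, 128, 1.0
active = np.array([3, 17, 42])                  # rows with nonzero gradient
grad_rows = rng.normal(size=(len(active), d))   # their (clipped) gradients

def dense_noisify():
    """DP-SGD style: materialize and noise the full V x d gradient."""
    g = np.zeros((V, d))
    g[active] = grad_rows
    return g + rng.normal(scale=sigma, size=(V, d))

def sparse_noisify():
    """Sparsity-preserving: noise only the active rows; the update stays
    a k-row sparse object usable by sparse parameter-update kernels."""
    return active, grad_rows + rng.normal(scale=sigma, size=grad_rows.shape)

rows, noised = sparse_noisify()
```

The asymptotic gap per step ($O(|V| \cdot d)$ vs. $O(k \cdot d)$ when $|V| \gg k$) is one place a large wall-clock difference could come from, though the reported numbers would depend on the actual implementation and hardware.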
## Q5
Please update the baseline accuracy during the discussion period. I believe the numbers should have been saved, so there is no need to rerun all the experiments. Without this information, it is impossible to evaluate whether the proposed method is being compared with a reasonable baseline.
[1] Wide Network Learning with Differential Privacy. Huangyu Zhang et al., 2021
---
Reply to Comment 1.1.1:
Title: Response to follow-up questions (1/3)
Comment: We appreciate the reviewer for taking the time to read our response and sharing follow-up concerns.
### **Q1. Missing placement of contribution?**
> While DP on large language models has been widely researched, with many previous works cited in this paper (e.g., [YNB+ 22] and [LTLH21]), it's worth noting that LTLH21 argued that DP full fine-tuning might outperform non-private fine-tuning. Thus, the above claim appears misleading or exaggerated. Moreover, this work's experimental settings, such as utilizing the RoBERTa network and certain language datasets, have been previously explored. Hence, this paper is not treading new ground.
**A**: We appreciate the reviewer for the pointers to the related work and for acknowledging that we've already discussed them in our submission. We hope to further clarify that the focus of our work is significantly different from [YNB+ 22] and [LTLH21].
[YNB+ 22] didn’t study how to effectively train large embedding layers under DP; they froze the embedding layers in DP fine-tuning and investigated DP’s compatibility with parameter-efficient fine-tuning methods for attention layers such as LoRA and Adapter. As we noted in the response to your Q2, when applied to embedding layers, the gradient size reduction introduced by LoRA is inferior to our proposals.
[LTLH21] observed that embedding layers can significantly contribute to memory consumption during DP training of large language models. To address this, they applied ghost clipping (see Sec 4.2 in their paper) to avoid generating per-example gradients, thereby reducing the memory impact of embedding layers during training. We'd like to highlight the distinctions between our work and [LTLH21]:
- **Primary goal**: Our primary goal is model efficiency, whereas [LTLH21] primarily targets memory reduction. However, please note that our methods also lead to reduced memory usage as a secondary benefit, because of the reduction in gradient size.
- **Specific strategies**: We deliberately induce sparsity within embedding layers, whereas [LTLH21] focuses on optimizing per-example gradient operations to save memory.
- **Beyond NLP models**: We focus on the sparsity of large embedding layers. While this benefits many transformer-based NLP models, an equally significant contribution of our study is in the systematic evaluations on recommender systems, where the embedding layers could occupy up to 90% of the model weights, and techniques like LoRA cannot be easily applied because there are no public pre-trained models.
> I perceive the primary focus of this work as preserving sparsity. From an abstract standpoint, the gradient of the embedding layer could be viewed as a vector with mostly zero elements. Numerous studies have tackled sparse vectors in the context of DP. I see the authors also agree with that, so I won't take the time to look up and copy the links. However, I find a gap in the introduction of relevant methods, and it is unclear why this specific form of DP-AdaFEST is chosen.
Yes, we indeed agree with the reviewer that there are numerous studies of the abstract problems of sparse vectors in DP. Our main contribution is applying such techniques to **large embedding models**, and **systematic evaluations** that demonstrate practical benefits in large-scale models in recommender systems and language models.
We will expand our discussion of related work to fill the gap. This specific form of DP-AdaFEST was chosen to balance simplicity (minimum modification to existing DP-SGD privacy accounting) and practical performance. But we leave the extensive comparison of different sparse-preserving DP mechanisms to future work.
> The concept of sparing the addition of noise to large embedding layers dates back at least two years [1].
We appreciate this pointer, and will try to report comparison results before the end of the rebuttal period for completeness. However, we respectfully hold a different opinion regarding the reviewer’s assessment of [1] tackling *“large”* embedding models, as [1] only evaluated with a small Word2Vec model: they use an embedding layer with vocabulary size $|V|$=1000 and embedding dimension $d$=100, resulting in an embedding layer of size 100,000 (only twice the size of the simple LeNet-5 architecture for digit classification). In our evaluation, by contrast, the recommendation task uses $|V|$=1.7M with 77M total embedding params, and the NLU task has $|V|$=50,265 and $d$=768, with 38M total embedding params. In summary, the models trained in our work are multiple orders of magnitude larger than [1], and significantly more representative of *large* embedding models used in practice. | Summary: This paper considers an interesting and less-studied aspect of DP-SGD: applying it naively to embedding models can destroy gradient sparsity, reducing training efficiency.
To address this issue, the paper proposes two algorithms DP-FEST and DP-AdaFEST that apply DP while maintaining gradient sparsity during the training of large embedding models.
Strengths: The authors did a great job motivating the problem in the introduction. The question addressed in this work is of great importance and a reasonable answer to it can potentially have significant consequences, as evidenced by the experiments.
Weaknesses: The paper lacks any rigorous analysis of the proposed algorithms. More precisely, there is no privacy analysis of DP-FEST and DP-AdaFEST.
Instead, the authors only mentioned (line 169-170): "In particular, the privacy cost of a single iteration is equivalent to that of the composition of two Gaussian mechanisms with noise scale $\sigma_1$
and $\sigma_2$ respectively, which in turn is equivalent to the privacy cost of a single Gaussian mechanism of noise scale $\sigma = (\sigma_1^{-2}+\sigma_2^{-2})^{-1/2}$." Why not use this result to derive a formal privacy analysis of the algorithm?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Is there any gap between the privacy guarantee of DP-SGD and that of DP-AdaFEST? If so, can the authors quantify it?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes!
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We genuinely appreciate the valuable feedback provided by the reviewer and have addressed them below. We are more than willing to engage in further discussions with the reviewers should any follow-up questions arise.
### **Q1. Lack of rigorous privacy analysis**
> The paper lacks any rigorous analysis of the proposed algorithms. More precisely, there is no privacy analysis of DP-FEST and DP-AdaFEST. Instead, the authors only mentioned (line 169-170): "In particular, the privacy cost of a single iteration is equivalent to that of the composition of two Gaussian mechanisms with noise scale $\sigma_1$ and $\sigma_2$ respectively, which in turn is equivalent to the privacy cost of a single Gaussian mechanism of noise scale $\sigma = (\sigma_1^{-2} + \sigma_2^{-2})^{-1/2}$." Why not use this result to derive a formal privacy analysis of the algorithm? Is there any gap between the privacy guarantee of DP-SGD and that of DP-AdaFEST? If so, can the authors quantify it?
**A**: Please note that a detailed privacy analysis was not included since we felt it is standard. Indeed, the analysis is near-identical to that of DP-SGD and involves repeated application of the sub-sampled Gaussian mechanism with scale $\sigma$. In DP-AdaFEST, we have a repeated application of a sub-sampled mechanism where the inner mechanism involves application of two Gaussian mechanisms with noise scales $\sigma_1$ and $\sigma_2$. It is well-known (e.g., [1, Corollary 3.3]) that such an inner mechanism is privacy-wise equivalent to an application of a single Gaussian mechanism with noise scale $\sigma = (\sigma_1^{-2} + \sigma_2^{-2})^{-1/2}$, and thus, **DP-AdaFEST is privacy-wise equivalent to a sub-sampled Gaussian mechanism with scale $\sigma$**.
The privacy analysis for DP-FEST is also straightforward, since it involves the composition of the first frequency filtering step and the second training step which is privacy-wise equivalent to a sub-sampled Gaussian mechanism.
We thank the reviewer again for raising this point and we will add these details in the revision.
**References**:
[1] Jinshuo Dong, Aaron Roth, Weijie J. Su. Gaussian Differential Privacy.
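The noise-scale equivalence cited above can be checked numerically. A minimal sketch, using the Gaussian DP view from [1]: a sensitivity-1 Gaussian mechanism with scale $\sigma$ is $\mu$-GDP with $\mu = 1/\sigma$, and GDP parameters compose in quadrature.

```python
import math

def effective_sigma(sigma1: float, sigma2: float) -> float:
    """Noise scale of the single Gaussian mechanism that is privacy-wise
    equivalent to composing Gaussian mechanisms with scales sigma1, sigma2."""
    return (sigma1 ** -2 + sigma2 ** -2) ** -0.5

sigma1, sigma2 = 2.0, 3.0
sigma = effective_sigma(sigma1, sigma2)

# Cross-check: GDP parameters mu_i = 1/sigma_i compose as
# mu = sqrt(mu1^2 + mu2^2), which matches 1/sigma.
mu = math.sqrt((1 / sigma1) ** 2 + (1 / sigma2) ** 2)
assert abs(1 / sigma - mu) < 1e-12

# Composing two mechanisms is less private than either alone, so the
# equivalent single noise scale is below both sigma1 and sigma2.
assert sigma < min(sigma1, sigma2)
```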
---
Rebuttal Comment 1.1:
Title: Follow-up
Comment: Dear reviewer, did our explanation clarify your question on the privacy analysis? Since this seems to be the main concern from the reviewer, we would like to make sure we addressed it before the rebuttal deadline ends. To summarize the argument with more details:
* Both DP-SGD and DP-AdaFEST perform repeated application of an “inner mechanism” on sub-sampled batches of data. The “inner mechanism” is a single Gaussian mechanism in the case of DP-SGD, and the application of two Gaussian mechanisms in the case of DP-AdaFEST.
* It is known from [1, Corollary 3.3] that the privacy loss random variable corresponding to the composition of two Gaussian mechanisms with noise scales $\sigma_1$ and $\sigma_2$ is identical to the privacy loss random variable of a single Gaussian mechanism with noise scale $\sigma = (\sigma_1^{-2} + \sigma_2^{-2})^{-1/2}$.
* Hence, the privacy guarantee of DP-AdaFEST with noise scales $\sigma_1$ and $\sigma_2$ is equivalent to the privacy guarantee of DP-SGD with noise scale $\sigma = (\sigma_1^{-2} + \sigma_2^{-2})^{-1/2}$. | Summary: This paper focuses on the concept of gradient sparsity in large embedding models during training, with a particular focus on privacy-preserving methods. The commonly used DP-SGD approach adds noise to all embedding gradients, even those that may not appear in the current batch, in order to ensure privacy during model updates. The authors propose algorithms aimed at preserving gradient sparsity while achieving privacy in the training of these models. By doing so, they aim to improve the efficiency and effectiveness of private training for large embedding models.
Strengths: The paper is well-written and the presentation is clear, which is helpful to follow and understand the paper. The problem is clearly described, DP-SGD adding noise to all embedding gradients destroys the sparsity and the authors propose approaches for private training while keeping the sparsity for effectiveness. The experiments (in certain settings, see below) empirically support the claims of the paper.
Weaknesses: I think the main weakness of the paper is that the approach is limited in the sense that it is helpful in more specific scenarios where the embedding layer dominates the network. The CTR prediction task might be a good example of this, where the sparsity in the embedding layer matters, but for language models the reviewer is not so sure about the effectiveness of this approach. More details are in the questions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) The authors use the RoBERTa model for downstream classification tasks from the GLUE benchmark. The model has 50k vocab size. Prior work that the authors cite shows that DP-SGD provides strong performance on large batch sizes (approx 1k-10k) and considering that each sample has (approx 128-512) tokens, do we expect to see the gradient sparsity phenomenon in DP-SGD training of large language models with these hyperparams? The paper shows embedding gradient sparsity in Ads model but I do not see any discussion around this regarding language models.
2) For large language models, transformer layers are in general the dominating part of the model. Could the authors provide examples for how/when this sparsification approach of the embedding layer be effective and help concretely in what sense?
3) Actually, the DP-SGD hyperparameters for the language model experiments are not clearly provided. I only see the sentence that says "we fine-tune the clipping norm and report the best accuracy". On the other hand, it's known from prior work that when set sufficiently small, the clipping norm does not play a major role in the performance of the model, but batch size and learning rate are the critical hyperparameters.
Minor: Many Appendix B references seem to point to Appendix C.
I can see the contributions of the paper, but perhaps rather in a more limited case where the embedding layer is the critical part of the model in terms of size and time bottleneck. It just seems to me that (large) language models may not really be representative of this scenario.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors adequately addressed the limitations according to this reviewer (apart from the reviewer's questions above).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We genuinely appreciate the valuable feedback provided by the reviewer and have addressed them in a point-by-point manner below. We are more than willing to engage in further discussions with the reviewers should any follow-up questions arise.
### **Q1. Lack of discussions of language models**
> Prior work that the authors cite shows that DP-SGD provides strong performance on large batch sizes (approx 1k-10k) and considering that each sample has (approx 128-512) tokens, do we expect to see the gradient sparsity phenomenon in DP-SGD training of large language models with these hyperparams? The paper shows embedding gradient sparsity in Ads model but I do not see any discussion around this regarding language models.
**A**: Great question! The following table reports the number of unique tokens in the batch and the corresponding gradient sparsity for the RoBERTa model on the SST-2, QNLI, and QQP datasets. The table also includes the average sequence length for each dataset. The vocabulary size is 50,265.
| **Batch size** | **SST-2 (avg. seq length: 14.4)** | | **QQP (avg. seq length: 29.1)** | | **QNLI (avg. seq length: 36.7)** | |
|---|---|---|---|---|---|---|
| | # unique tokens | Gradient sparsity | # unique tokens | Gradient sparsity | # unique tokens | Gradient sparsity |
| 16 | 150.6 | 0.003 | 209.2 | 0.004 | 342.0 | 0.007 |
| 64 | 486.4 | 0.010 | 667.2 | 0.013 | 1142.2 | 0.023 |
| 256 | 1,363.0 | 0.027 | 1,948.2 | 0.039 | 3,528.0 | 0.070 |
| 1,024 | 3,637.2 | 0.072 | 5,062.8 | 0.101 | 9,054.0 | 0.180 |
| 4,096 | 7,637.6 | 0.152 | 11,306.2 | 0.225 | 18,556.2 | 0.369 |
| 16,384 | 11,982.8 | 0.238 | 21,605.6 | 0.429 | 30,204.6 | 0.600 |
| 65,536 | 14,105.0 | 0.281 | 31,494.0 | 0.657 | 37,403.0 | 0.744 |
From this table, even for large batch sizes (1,024-16,384), the number of unique tokens is still much smaller (~2-14x) than the overall vocabulary size. Intuitively, this is because a significant portion of the vocabulary comprises uncommon tokens, which experience infrequent updates during training. Thus our algorithm can be applied even with large batch sizes.
We also note that while very large batch sizes (batch size of >1M) seem essential to achieve strong performance for private pre-training [1], this is not the case for private fine-tuning [2] (batch size of ~2k).
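The sparsity numbers in the table come down to simple counting. A toy sketch of the measurement (illustrative token ids, not actual tokenizer output): the fraction of embedding rows receiving a nonzero gradient equals the number of unique token ids in the batch divided by the vocabulary size.

```python
def embedding_gradient_density(batch_token_ids, vocab_size):
    """Fraction of embedding rows that receive a nonzero gradient:
    unique token ids in the batch over the vocabulary size."""
    unique = set()
    for seq in batch_token_ids:
        unique.update(seq)
    return len(unique) / vocab_size

# Two short sequences over a RoBERTa-sized vocabulary (50,265 entries).
batch = [[0, 713, 16, 10, 1296, 2], [0, 713, 21, 205, 2]]
density = embedding_gradient_density(batch, vocab_size=50_265)
assert density == 8 / 50_265   # 8 unique ids touch only 8 of 50,265 rows
```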
**References**:
[1] Anil R, Ghazi B, Gupta V, Kumar R, Manurangsi P. Large-scale differentially private BERT. EMNLP 2022.
[2] He J, Li X, Yu D, Zhang H, Kulkarni J, Lee YT, Backurs A, Yu N, Bian J. Exploring the limits of differentially private deep learning with group-wise clipping. ICLR 2023.
### **Q2. Applicability of the proposed algorithm on LLMs**
> For large language models, transformer layers are in general the dominating part of the model. Could the authors provide examples for how/when this sparsification approach of the embedding layer be effective and help concretely in what sense?
**A**: We thank the reviewer for raising this point. Our evaluation primarily centered around the RoBERTa model (vocab size: 50k) because it is a standard backbone. However, it is important to note that many high-vocabulary models exist, with vocabulary sizes 5-20x larger than RoBERTa (shown below). For these models, our proposed sparsification method, DP-AdaFEST, could offer even more pronounced benefits.
| **Model** | **Vocabulary size** |
|---|---|
| XLM-R [1] | 250k |
| VoCAP [2] | 250k ~ 500k |
| XLM-V [3] | 250k ~ 1M |
For example, the following are the results on the XLM-R model for the Cross-Lingual Natural Language Inference (XNLI) task when $\epsilon$=1.0. As shown, DP-AdaFEST is able to achieve **a gradient size reduction of >150x** at a minimal utility loss of 0.01.
| **Utility loss compared to DP-SGD** | **Best gradient size reduction by DP-AdaFEST** |
|---|---|
| 0.001 | 19.84x |
| 0.005 | 73.42x |
| 0.01 | 162.13x |
We would like to further clarify that our proposed algorithm is mainly designed to preserve gradient sparsity of embedding layers in a black box fashion. However, users have the flexibility to apply any desired algorithm to enhance model sparsity for other layers (e.g., by employing LoRA for the transformer blocks). We believe this modular approach allows users to tailor the sparsity optimization according to their specific requirements for different layers within the model.
**References**:
[1] Conneau A, Khandelwal K, Goyal N, Chaudhary V, Wenzek G, Guzmán F, Grave E, Ott M, Zettlemoyer L, Stoyanov V. Unsupervised cross-lingual representation learning at scale. ACL 2020.
[2] Zheng B, Dong L, Huang S, Singhal S, Che W, Liu T, Song X, Wei F. Allocating large vocabulary capacity for cross-lingual language model pre-training. EMNLP 2021.
[3] Liang D, Gonen H, Mao Y, Hou R, Goyal N, Ghazvininejad M, Zettlemoyer L, Khabsa M. Xlm-v: Overcoming the vocabulary bottleneck in multilingual masked language models. Arxiv preprint.
### **Q3. Hyperparameters for NLU tasks**
> DP-SGD hyperparameters for language model experiments are not clearly provided… it's known again from prior work that when set sufficiently small, clipping norm does not play major role in the performance of the model but batch size and learning rate are the critical hyperparameters.
**A**: Please note that Appendix C.1 of our submission contains the hyperparameter choices for NLU tasks. Specifically, to constrain the search space, we fix the batch size to 1024, vary the learning rate in {5e-4, 1e-3, 2e-3, 5e-3}, and vary the clipping norm in {0.1, 1.0, 2.0, 5.0, 10.0}. For convenience, the table below contains the optimal selection of hyperparameters for various NLU tasks when $\epsilon$=1.
| | **SST-2** | **QQP** | **QNLI** |
|---|---|---|---|
| Learning rate | 1e-3 | 5e-3 | 5e-3 |
| Clipping threshold | 10.0 | 10.0 | 10.0 |
### **Typo: Many Appendix B references seem to point to Appendix C.**
**A**: We appreciate the reviewer’s careful reading, and we will fix these issues in the revision.
---
Rebuttal Comment 1.1:
Title: Response to the Authors
Comment: The reviewer appreciates the extra work the authors put in to answer the reviewer's questions. The first table shows that, indeed, in private fine-tuning with DP-SGD-friendly hyperparameters, the sparsity phenomenon still shows up. That's helpful, thanks very much. Regarding my Q2, adding other sparsities like LoRA is an interesting point. It seems to me that with full fine-tuning, the dominating BERT- and GPT-family LLMs include the embedding layer as only a small part of the overall model, so the sparsity may not really help much in the overall runtime etc., as the gradients in other layers will already dominate things. But indeed, if one fine-tunes with LoRA + the embedding layer, then things might change. However, I am not really sure if this is common practice, i.e., if we are fine-tuning with LoRA, do we even need to add the embedding layer to the fine-tuning? Does it really provide an extra improvement?
In any case, the reviewer increases their rating and thanks the authors for their responses.
---
Reply to Comment 1.1.1:
Title: Thank you & on whether fine-tuning word embeddings benefits LoRA
Comment: **A**: We appreciate the reviewer’s response and for increasing the score. We are glad that our additional results help address the reviewer’s concern.
> But indeed if one fine-tunes with LoRA + embedding layer, then things might change. But I am not really sure if this is common practice, i.e. if we are fine-tuning with LoRA, do we even need to add the embedding layer to the fine-tuning? Does it really provide an extra improvement?
We appreciate this follow-up question. Typically, full-parameter fine-tuning with embedding layers updated leads to better performance compared to (efficient) partial-parameter fine-tuning such as LoRA. For instance, in Table 3 of [1], full-parameter fine-tuning (FT) surpasses LoRA (LR) across most tasks. In addition, Table 15 of the LoRA paper [2] also shows that combining LoRA with prefix-embedding tuning (which injects trainable word embeddings) improves the performance of vanilla LoRA, which suggests that employing trainable word embeddings with LoRA could lead to potential improvements in utility.
> Does it really provide an extra improvement?
**Yes**. As demonstrated in Table 3 in our Appendix C.2.3, applying LoRA during fine-tuning, along with simultaneous updates to the embedding layer, offers better utility compared to using LoRA alone.
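A minimal numeric sketch of the combined setup discussed here (assumed shapes and names, not any specific library's API): a frozen weight $W$ is adapted by a low-rank LoRA update $BA$, while the embedding table is additionally updated directly, and only on the rows active in the batch, so its update remains sparse.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                       # model dim, LoRA rank
W = rng.normal(size=(d, d))       # frozen transformer weight
A = rng.normal(size=(r, d))       # trainable LoRA factor
B = np.zeros((d, r))              # B starts at zero: adapter is a no-op

W_eff = W + B @ A                 # effective weight in the forward pass
assert np.allclose(W_eff, W)      # before training, behavior is unchanged

# The embedding table is trained directly, but only on rows seen in the
# batch, so its (noised) gradient update stays a k-row sparse object.
V = 100
E = rng.normal(size=(V, d))
active = [3, 7]
E[active] -= 0.1 * rng.normal(size=(len(active), d))  # toy update step
```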
Once again, we value the prompt feedback from the reviewer and remain available for further discussion if any new questions emerge.
**References**
[1] Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu et al. Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models. arXiv preprint.
[2] Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. ICLR 2022. | Rebuttal 1:
Rebuttal: We express our gratitude to the AC and all reviewers for their time and valuable feedback. We appreciate the acknowledgment that "the question addressed in this work is of great importance" and that "the proposed method is very intuitive/effective."
Below, we provide a summary of the comments and our corresponding responses for clarity. For more details, please refer to the individual responses to each reviewer. We are more than willing to engage in further discussions with the reviewers should any follow-up questions arise.
## **1. Our contribution and comparison with LoRA**
- **Novelty** (Reviewer [KMWx](https://openreview.net/forum?id=sqTcCXkG4P&noteId=JN1qnViLX3), Q1): We kindly clarified that to the best of our knowledge, no prior research has addressed sparse DP training of extensive embedding layers in recommender systems and language models. We welcome any references the reviewer might have on this topic for potential inclusion.
- **Comparison with LoRA** (Reviewer [KMWx](https://openreview.net/forum?id=sqTcCXkG4P&noteId=JN1qnViLX3), Q2): We detailed the limitations of applying LoRA to embedding models, including 1) limited room for enhancement, 2) challenges in harnessing its gradient size reduction, and 3) constraints on fine-tuning (unlike our pre-training adaptable methods). Furthermore, our empirical results show that LoRA underperforms in gradient size reduction compared to our proposed approaches.
## **2. Applicability of our methods**
- **For large batch sizes in language models** (Reviewer [upr1](https://openreview.net/forum?id=sqTcCXkG4P&noteId=T4kwfmPHxH), Q1): Our demonstrated sparsity pattern shows that even with substantial batch sizes (1,024-16,384), the unique token count remains much lower (~2-14x) than the total vocabulary size. This affirms the applicability of our algorithm to larger batch sizes.
- **For larger vocabulary language models** (Reviewer [upr1](https://openreview.net/forum?id=sqTcCXkG4P&noteId=T4kwfmPHxH), Q2): We showcased our method's capability to achieve greater gradient size reduction in language models with expanded vocabularies (e.g., 250k compared to 50k in our submission).
- **Compatibility with sparsity optimization in other layers** (Reviewer [upr1](https://openreview.net/forum?id=sqTcCXkG4P&noteId=T4kwfmPHxH), Q2): We clarified that our algorithm primarily preserves gradient sparsity in embedding layers in a black box manner. However, users retain flexibility to apply desired methods for enhancing model sparsity in other layers (e.g., using LoRA for transformer blocks).
## **3. Privacy Analysis** (Reviewer [ah8R](https://openreview.net/forum?id=sqTcCXkG4P&noteId=LZFIK6TFq5), Q1)
We clarified that our proposal is privacy-wise equivalent to a sub-sampled Gaussian mechanism, and provided more details for the privacy analysis.
## **4. Clarification questions**
We’ve also
- Clarified that the dynamically changing gradient sparsity patterns in our methods can be effectively leveraged by customized APIs such as TPUEmbedding (Reviewer [KMWx](https://openreview.net/forum?id=sqTcCXkG4P&noteId=JN1qnViLX3), Q3);
- Explained our evaluation metric (Reviewer [KMWx](https://openreview.net/forum?id=sqTcCXkG4P&noteId=JN1qnViLX3), Q4).
## **5. Planned changes to the manuscript**
We will also make the following changes in the revision as suggested:
- (Reviewer [AuCL](https://openreview.net/forum?id=sqTcCXkG4P&noteId=XBm5ltP096), Q1) Change the title to "Sparsity-Preserving Differentially Private Training of Large Embedding Models", to more accurately reflect the scope of our work;
- (Reviewer [ah8R](https://openreview.net/forum?id=sqTcCXkG4P&noteId=LZFIK6TFq5), Q1) Provide a detailed privacy analysis for the proposed method;
- (Reviewers [KMWx](https://openreview.net/forum?id=sqTcCXkG4P&noteId=JN1qnViLX3) Q5 & [AuCL](https://openreview.net/forum?id=sqTcCXkG4P&noteId=XBm5ltP096), Q2) Report non-private baseline numbers for all datasets;
- (Reviewer [upr1](https://openreview.net/forum?id=sqTcCXkG4P&noteId=T4kwfmPHxH), Q3) Provide more details for the hyper-parameters for NLU tasks;
- (Reviewer [AuCL](https://openreview.net/forum?id=sqTcCXkG4P&noteId=XBm5ltP096), Q3) Add new references for improvements of vanilla DP-SGD;
- (Reviewer [upr1](https://openreview.net/forum?id=sqTcCXkG4P&noteId=T4kwfmPHxH)) Fix typos. | NeurIPS_2023_submissions_huggingface | 2023 | null | null | null | null | null | null | null | null |
What’s Left? Concept Grounding with Logic-Enhanced Foundation Models | Accept (poster) | Summary: This paper introduces Neuro-FOL, a method combining a large language model for generating FOL programs, an FOL executor and trainable concept grounding modules to improve performance on a number of visual/3D QA style tasks.
In the method, an LLM interpreter is used to generate an FOL program that is not specific or grounded to the exact domain. A differentiable first-order logic executor is then used to hierarchically evaluate the value of the program, using learnable, domain-specific grounding modules to ground the program in the specific domain.
The paper evaluates in 4 settings: 2D question answering, 3D referring expressions, temporal sequence reasoning, and robotic manipulation. The method performs similarly to both NS and end-to-end methods in the normal-data settings shown in Table 2, and beats end-to-end methods in low-data settings (Table 4). The paper also shows good performance on a set of held-out CLEVR tasks.
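The two-stage evaluation described above can be illustrated with a toy soft-logic executor (the names and the particular soft operators here are assumptions for illustration, not the paper's code): grounding modules emit per-object concept probabilities, and logical operators combine them differentiably.

```python
def exists(scores):          # soft "there exists": take the max score
    return max(scores)

def land(p, q):              # soft conjunction: product of probabilities
    return p * q

# Grounding-module outputs for three objects: P(red), P(cube).
red  = [0.9, 0.1, 0.2]
cube = [0.8, 0.7, 0.1]

# Evaluate: exists x. red(x) AND cube(x)
answer = exists([land(p, q) for p, q in zip(red, cube)])
assert abs(answer - 0.72) < 1e-9   # object 0 dominates: 0.9 * 0.8
```

Because every operator is a smooth (or subdifferentiable) function of the grounding-module outputs, gradients from a downstream answer loss can flow back into the domain-specific modules while the program structure from the LLM stays fixed.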
Strengths: I think the main idea of the paper is really clever and well motivated. I think using a pre-trained LLM to do the logical breakdown of the query by generating a program, which can then be differentiably trained, is really smart. It gives you a way to have both the generality you get from LLMs having seen many concepts and the flexibility of adapting to a new domain.
The wide variety of domains and datasets evaluated is really good; it shows that the method works across very different domains.
It shows really good performance compared to end-to-end methods in the low-data case, which is good.
Paper is pretty clearly presented and motivated.
Weaknesses: I think the claim of it being a universal language for all domains, and therefore generally applicable across domains, is maybe misleading and not totally supported. I agree that you could generate a program for pretty much any query-based problem, but this misses a critical assumption: that the method has sufficient data to train each of the domain-specific modules that you would generate. And of course, if during evaluation the program invoked a module that hadn't been seen during training, it would fail. I think this is a real limitation of the method that should be more clearly mentioned in the paper.
Related to the above point, the set of domains is somewhat limited, and especially the kinds of reasoning here are pretty narrow. For instance, the 2D question answering is all done on CLEVR, where the visual domain is extremely limited (only a set number of different block objects in a set number of colors) and the space of questions is rather limited (questions chaining together reasoning about object color, relative position, count, etc.). There are, for instance, many kinds of questions about images (e.g., in the VQA dataset) that might not really have good coverage in the training set. If you had to train a new MLP classifier for each kind of attribute, object class, and relation you might find in VQA, it's not totally obvious to me that this method would have enough data to actually work. SR3D is definitely more visually diverse in the number of object types, but the question types are also quite limited to these kinds of spatial-reasoning questions. Similarly, robotic manipulation is more visually diverse and has more objects, but also suffers from a lack of diversity in the kinds of queries it needs to perform.
I get that the advantage of this method over pure NS approaches is that you do not need to manually define a domain-specific language, but the comparisons don't really show any improvement over them. The claims about universality would be stronger if the domains were less constrained, showing for instance that this method can deal with unusual domain groundings in a way that would be hard to anticipate for a designer of NS methods.
Kind of a minor point, but I think the point on line 105 about VLMs not working on different domains such as 3D scenes and motion sequences is pedantic. Models not trained on 3D scenes and motion sequences of course don't generalize to these, but one could train a network on these domains as well. For instance: Gato (S Reed 2022) is trained on multiple kinds of modalities.
"Flamingo" in Table 6 should REALLY be stated to be Open-Flamingo. This is especially confusing since the paper uses the citation when it names the method. But it's not using the Flamingo model, it's using an open-source alternative to the model not Flamingo itself, which is very confusing/misleading.
A couple of unanswered questions I had (see below).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: In Table 6, is Neuro-FOL trained on the regular CLEVR task before being put in the held-out CLEVR tasks? What about [Open] Flamingo?
In general, could you more clearly state in the paper the training loop for Neuro-FOL. I sort of infer that it is trained end-to-end on the target tasks, but that the only parts of the network that are actually updated are the domain specific grounding modules? Is that correct?
How does your method respond to very rare concepts, given that you need to finetune an MLP for every grounding function? Do you see cases where a concept has very few examples during training and that module performs poorly?
How often does the LLM fail to provide a valid program and how often does it fail to provide a correct program? Does an invalid program result in Neuro-FOL failing to run?
If the program is incorrect during training, does this hurt performance in eval?
How robust is the method in general to program failures?
In general, what are the failure cases of this method?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 4 excellent
Limitations: I don't think that the paper does much to address the limitations of their methodology (see above) which I think would add a lot of value to the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q: Sufficient data for training grounding modules.**
A: Thank you for bringing up this point! We agree that Neuro-FOL may fail with insufficient data, which is a limitation for most learning systems trained solely on one given dataset. We provide three main perspectives here.
1. Neuro-FOL is notably more data efficient than end-to-end methods due to its modular structure (decomposing complex concepts into primitive ones).
2. To partially mitigate this issue, Neuro-FOL instructs LLMs to canonicalize concepts when reasonable (e.g., "in between" and "sandwiched" may both be interpreted as "between"), which helps with learning certain rare concepts.
3. It is possible and straightforward to incorporate externally trained modules. For example, we can plug in Grounding DINO [1] for object detection. However, for some tasks, there are no off-the-shelf general visual recognition models (e.g., relations and human motions); therefore, we train grounding modules solely from the dataset. We fully agree that how to better integrate pretrained models by finetuning, or by incrementally learning new concepts, is important. Indeed, as you mentioned, if Neuro-FOL sees a completely new concept during evaluation, it will fail, similar to other methods trained solely on the given data. We will clarify this limitation in the main text!
[1] Liu, Shilong, et al. "Grounding DINO." 2023.
**Q: Rare concepts.**
A: As mentioned above, Neuro-FOL’s modularity enables data-efficient learning of rare concepts: given one datapoint each for “rainbow fire hydrant” and “rainbow sink”, end-to-end methods have one datapoint for each of the two concepts, while Neuro-FOL has two datapoints for “rainbow” and can potentially leverage data for “fire hydrant” and “sink” elsewhere. This is partially reflected in our data efficiency comparisons. Yet without enough data, Neuro-FOL’s performance will indeed suffer. For example, in ReferIt3D, the concept "clothes" appears only 18 times in training, and its classification accuracy is 0.207. Similarly, "cart" appears 84 times with accuracy 0.320. By contrast, "couch" appears 3640 times with accuracy 0.920, and "sink" 3028 times with accuracy 0.892.
**Q: Diversity of evaluation domains.**
A: We agree that the long-tail problem is important, and difficult. However, our goal here is to integrate domain-specific concept learning with a unified, domain-independent reasoning framework; hence we defer the integration of few-shot learning methods to future work. We also note that some domains we evaluate on do have complex reasoning structures; in NR3D (Table 3) we see both visual and question diversity. Complex query examples include: “It's the middle towel out of the 3 that's all beside each other, disregard the only towel”, “If you are sitting in bed with your head against the headboard, it is the nightstand to your left.”
**Q: Universality of FOL.**
A: Thank you for raising this point. We do not claim to outperform prior neuro-symbolic methods, as they represent "oracle" implementations for particular datasets. We instead focus on a unified framework across domains and tasks that retains the benefits of neuro-symbolism while minimizing the need for domain-specific knowledge.
In the global response, we have also added code samples that highlight how Neuro-FOL’s unified reasoning modules enable flexible neuro-symbolic learning across domains.
Additionally, in the CLEVR-Puzzle task (Section 4.2), we demonstrate Neuro-FOL in a setting where the language instruction cannot be parsed by the original CLEVR DSL but can be represented by combining concepts with our general FOL, showing that Neuro-FOL can handle new tasks that are hard to anticipate for the original DSL designers.
**Q: Training data for held-out CLEVR tasks.**
A: Yes, Neuro-FOL is trained on the original CLEVR with 70,000 images; Open Flamingo is trained on LAION 2B with 5 million image-text examples from Multimodal C4.
**Q: Training loop.**
A: Yes, the domain-specific functions (feature extractors and concept MLPs) are trained through backpropagation, as our executor is fully differentiable, and we use LLMs inference-only.
**Q: Program generation.**
A: LLMs occasionally fail to provide valid programs for complex natural language queries (failure rate 0.102 for CLEVR-Humans, 0.088 for NR3D). An invalid program will cause Neuro-FOL to fail. In the global response, we further compare this with VisProg and ViperGPT (both use LLMs for program generation). Neuro-FOL produced syntactically valid programs for 100% of queries in CLEVR-RPM (c.f. ViperGPT's 40%). For semantic correctness, since we cannot automatically label programs, we evaluate execution results instead.
**Q: Effect of program correctness & program robustness.**
A: If the program is incorrect, it is either (1) not executable, hence we lose a datapoint, or (2) semantically incorrect, which will harm performance. In our experience, Neuro-FOL programs have high semantic accuracy across tasks. For syntax errors, we can easily detect and resample. However, there is currently no way to recover from semantic errors. We believe a promising direction is to finetune the LLM from execution feedback.
**Q: Failure cases.**
A: In our experiments, FOL programs are typically correct, but the domain-specific functions are more difficult to learn. For example, in ReferIt3D, there are 607 categories, with many classes visually similar (e.g., cabinet and dresser). We see that Neuro-FOL classification accuracy for concept grounding is not very high. Hence, we can potentially rely on more classification labels, instead of only language instructions, to improve performance on this task. Additionally as mentioned, program failures and rare concepts also decrease performance.
**Q: VLMs across domains & Open-Flamingo.**
A: Thank you for the suggestions, we will clarify!
**Q: Limitations.**
A: We agree that an extended discussion section tackling limitations and future work will be helpful! We will add this to the main text.
---
Rebuttal Comment 1.1:
Title: Happy to answer any further questions
Comment: Dear Reviewer DugZ,
Thank you for reviewing our submission. We have posted our response per your suggestions and questions. We are happy to discuss with you and answer any further questions. As the deadline for discussion is approaching, we very much look forward to your feedback.
Thank you,
Authors
---
Rebuttal Comment 1.2:
Title: Response
Comment: Sorry for the late reply.
I think the response answered a lot of my questions. Given that and that I liked the paper to begin with, I will increase my score by one.
---
Reply to Comment 1.2.1:
Title: Thank you
Comment: Dear Reviewer DugZ, thank you again for your helpful comments and for reviewing our response! -Authors | Summary: The paper presents an approach for solving visual reasoning tasks across multiple domains such as 2D image QA and robotic object manipulation. The approach (Neuro-FOL) uses an LLM to generate a first-order logic (FOL) program that is executed with a combination of learned and predefined modules. One aim is to take advantage of a pre-trained LLM (e.g., GPT 3.5) to generalize to new visual reasoning tasks at inference time. Neuro-FOL is evaluated on a diverse set of benchmarks including the CLEVR 2D VQA tasks, ReferIt3D 3D spatial reasoning tasks, HumanMotionQA tasks, and the CLIPort object packing tasks. Results demonstrate that Neuro-FOL is competitive with and sometimes outperforms strong baselines.
Strengths: - The problem of grounding concepts across different domains (e.g., 2D images vs. 3D scenes) is an important problem because many concepts are naturally defined across these domains. This problem is well motivated in the introduction.
- The ability of Neuro-FOL to learn new concepts that are generated by the LLM during training is neat. Practically speaking, this should save designers the time required to define these concepts.
- The paper evaluates Neuro-FOL on a diverse set of domains including 2D image QA, 3D scene QA, motion capture QA, and robotic manipulation, which demonstrates the generality of the proposed approach. Furthermore, Neuro-FOL has strong performance in all tasks, suggesting that it will similarly do well in other domains.
- The paper is well written and easy to follow, but some details are missing as discussed below.
Weaknesses: - The abstract and introduction start with the grand vision of grounding the concept “left” in multiple domains. However, from the experimental evaluations, it is unclear if such “domain-independent” reasoning is (a) learned and (b) useful.
- (a) Are concepts learned in a domain-independent manner? In other words, is Neuro-FOL jointly trained on datasets from multiple domains? These details are not clearly stated in the current manuscript.
- (b) The experimental results do not clearly demonstrate if “domain-independent” grounding is useful. A specific experiment to test this would be training Neuro-FOL independently on different domains and then comparing against the performance of Neuro-FOL trained jointly. Such an experiment does not appear in the paper. Thus, I do not see support for these claims.
- The proposed approach differs from LLM + API based methods (e.g., Gupta and Kembhavi, 2023) in two ways: (a) the use of first-order logic and (b) not requiring a predefined set of modules (i.e., concept networks in Neuro-FOL). It is unclear which of these two design concepts is most important. We can imagine a world in which (a) does not matter but (b) is critical. Alternatively, both (a) and (b) might be important. The experimental evaluations do not disentangle these two features. Thus, it is unclear where the important novelties in the method lies. Note: the ViperGPT experiments do not disentangle this question. Thus, it would be helpful if the authors could comment on this further.
- While learning concept MLPs through Neuo-FOL training has the benefit that concepts do not need to be predefined, this may also be a disadvantage because there may only be a small amount of data for a given concept. For example, a given dataset may have many questions asking about “left” and “right” and may only have a few asking about “in between” or “sandwiched between”. It is unclear how the proposed approach would generalize to such situations.
- L257: “Notably, Neuro-FOL is able perform on all domains, instead of requiring specific models for each setting.” This claim is misleading, as components of Neuro-FOL are trained for specific domains.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. How does Neuro-FOL performance compare when training jointly on multiple domains vs. independently on individual domains?
2. Are both (a) using FOL and (b) not having a predefined set of concepts important?
3. How does Neuro-FOL perform on concepts that are not well represented in the training data?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations are briefly discussed, but additional details could be provided (perhaps based on answers to the questions above). The broader impacts are discussed in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q: Domain-independent reasoning & joint training.**
A: We realize that there may be confusion in our descriptions about "domain-specific" vs. "domain-independent" and "reasoning" vs. "grounding." Here, we clarify that Neuro-FOL is a framework of "domain-independent reasoning" (composed of an LLM interpreter and FOL executor) and "domain-specific grounding." Concretely, we learn the grounding of the word "left" separately in different domains, because they are grounded on different modalities: 2D, 3D, temporal sequences, etc. We agree that learning a single representation for the concept "left" that is shared across different modalities is an interesting and important direction.
To directly answer your questions, (a) concepts are learned through a “domain-specific” grounding module based on input modality, and (b) we do not conduct “domain-independent” grounding; instead we conduct domain-independent *reasoning* through an LLM interpreter and FOL executor. All experiments are trained on different domains, but Neuro-FOL enables the same domain-independent reasoning mechanism to work on all tasks.
We do not conduct joint training across domains because (1) the domain-independent LLM interpreter and FOL executor are not trained, and (2) the domain-specific grounding modules cannot be easily shared, since they ultimately operate on different modalities. Because Neuro-FOL is trainable, a promising future direction is to finetune the LLM (given access to the model weights and sufficient compute) to improve program generation and transfer reasoning across domains. We will clarify this in the main text!
In the global response, we have also added code samples that highlight how Neuro-FOL’s shared domain-independent components enable flexible adoption of neuro-symbolic learning for new domains and tasks.
**Q: Difference from LLM + API methods & importance of (a) usage of FOL and (b) no requirement of predefined concepts.**
A: Thank you for suggesting the disentanglement between the "reasoning primitives" and "learnable concepts"! The most important contribution we want to highlight in this paper is indeed (b). In many domains (e.g., human motion and robotic manipulation), it is important to have systems that can learn new concepts from available data outside of predefined functions, as mentioned in your review. Our system allows for this, and learns new concepts instead of being constrained to an inference-only system. Regarding (a), it is definitely possible to extend our language with more primitives (e.g., the ones used by ViperGPT and VisProg). In this work, we chose FOL primarily because it is general, expressive, and allows us to build an end-to-end differentiable computation graph while executing the programs across all domains.
Please also see our added experiments on VisProg in addition to ViperGPT in the global response that analyzes differences between LLM + API baselines and Neuro-FOL.
**Q: Sufficient data for concept learning & performance on not well represented concepts.**
A: Thank you for bringing up this point! We agree that Neuro-FOL may perform poorly when there is insufficient data for training. For example, in the ReferIt3D task, the concept "clothes" is not well represented and appears only 18 times in the train set, and its classification accuracy in the test set is 0.207; concept "cart" occurs 84 times with accuracy 0.320. By contrast, more commonly seen concepts such as "couch" occur 3640 times with accuracy 0.920, and "sink" occurs 3028 times with accuracy 0.892. We note that this is not a disadvantage particular to our system, as all learning systems will face this problem when they are trained solely on the given dataset. We provide three main perspectives to this problem.
First, Neuro-FOL is notably more data efficient than end-to-end methods due to its ability to decompose learning into modular networks and generate more training data for each individual concept. For example, given one training datapoint each for “rainbow fire hydrant” and “rainbow sink”, end-to-end methods may have one datapoint for each of the two rare concepts, while Neuro-FOL will have two datapoints for “rainbow”, and potentially more data from “fire hydrant” and “sink” learned elsewhere. This is partially reflected in the data efficiency comparisons in Table 4 and 5.
Second, to partially mitigate this issue for Neuro-FOL, we instruct the LLM interpreter to canonicalize the concepts when reasonable, hence concepts such as “in between” and “sandwiched between” may both be merged to the “between” concept by the LLM.
Third, it is possible and straightforward to incorporate externally trained models in Neuro-FOL. We can integrate pretrained models as domain-specific grounding modules, for example, directly plugging in Grounding DINO [1] for object detection. For some tasks, there are no off-the-shelf general visual recognition models (e.g., relations, 3D concepts, human motion); therefore, we train our concept groundings by learning solely from the dataset. We definitely agree that how to better integrate other pretrained models by finetuning, or by incrementally learning new concepts, is important. Thank you for the feedback! We will clarify this in an extended discussion section in the main text.
[1] Liu, Shilong, et al. "Grounding DINO." 2023.
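To make the modularity argument above concrete, here is a toy sketch (with hypothetical phrases, not actual dataset entries) of how decomposing composite phrases into primitive concepts multiplies the effective data per primitive:

```python
from collections import Counter

def concept_datapoints(phrases):
    """Count effective training datapoints per primitive concept,
    assuming each phrase decomposes into whitespace-separated
    primitives (a simplification of Neuro-FOL's decomposition)."""
    counts = Counter()
    for phrase in phrases:
        counts.update(phrase.split())
    return counts

# One datapoint each for two composite concepts...
counts = concept_datapoints(["rainbow fire_hydrant", "rainbow sink"])
# ...yields two datapoints for the shared primitive "rainbow",
# while an end-to-end learner sees each full phrase only once.
print(counts["rainbow"])  # -> 2
```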
**Q: Clarification of Neuro-FOL’s ability to perform across domains without requiring specific models.**
A: Thank you for catching this, we will update this to without “requiring domain-specific *reasoning modules*”. Our intended goal was to highlight that Neuro-FOL’s domain-independent LLM interpreter and FOL executor are re-used on all domains, in contrast to prior works which require new models to be defined from scratch.
**Q: Limitations.**
A: We agree that an extended discussion section tackling limitations and future work will be helpful! We will add this in the main text.
---
Rebuttal Comment 1.1:
Title: Happy to answer any further questions
Comment: Dear Reviewer B8BR,
Thank you for reviewing our submission. We have posted our response per your suggestions and questions. We are happy to discuss with you and answer any further questions. As the deadline for discussion is approaching, we very much look forward to your feedback.
Thank you,
Authors | Summary: This paper proposes an approach for general-purpose language grounding across a variety of domains and tasks. The approach first uses an LLM to generate a domain-general program, which can be implemented across different domains using domain-specific functions, represented as neural networks (and domain-general implementations of certain FOL predicates). Experiments are conducted on a variety of domains, including image understanding, video understanding, and robotic manipulation.
Strengths: The approach is compelling and relatively original -- it is especially original in combining a single unified formal representation across a variety of tasks. It is obviously addressing an important problem, and having modular approaches like these is important for investigating failure cases.
The approach is quite clearly detailed, except for details in training.
Experiments are very thorough. I appreciate evaluating on human data. I would have liked to see more analysis in model performance.
Weaknesses: My main concern is that modularizing the approach into essentially two components: (a) language -> FOL, and (b) FOL + perception -> answer/action, is glossing over nuances of language use that may require interaction between the language and perception aspects. For example, the implementation of "left" may be dependent on the identities of the objects participating in the spatial relationship: there may be a canonical "left" side of an object, given its affordances (e.g., the "left" of a fridge may be the area to its left when one is standing facing the fridge, regardless of the actual perspective being taken in the 3D question answering task); it seems the current approach would not be able to handle this context-dependence of "left", unless the domain-specific implementation of "left" was somehow able to encode all of the information necessary to make this inference (that a particular region is a fridge, and that a fridge has a canonical "left"). Another place where this comes up is that the meaning of adjectives can be highly context dependent in natural language use: e.g., the meanings of color terms depend strongly on context (as a specific example: let's say there is a very pale blue object and a very pale red object -- when hearing "blue", one would most likely choose the pale blue object. But if the same very pale blue object is paired with a more vibrant blue object, "blue" is most likely going to refer to the more vibrant one). In general, I'd be interested to hear discussion about this particular problem and how (if) the proposed approach could implicitly capture these more direct relationships between language use and perception, or if FOL acts as a bottleneck which removes any ambiguity in sentence meaning (it seems this is the goal in its design -- even the program \iota x blue(x) ^ sphere(x) in L132 may be ambiguous if there are multiple blue objects of varying shades!).
I also would have liked to see more in-depth analysis of the approach. In particular, what's left for tasks where there is still a lot of room for improvement -- basically everything except CLEVR? Because the approach is modular, it seems there is a lot of room for analysis in how different parts of the approach might be failing -- are the programs incorrect? (What causes a program to be inexecutable?) Are the learned domain-specific functions erroneous?
Small nitpicks:
* Having a consistent example in Figure 2 would be nice.
* I'd suggest putting the FOL programs in Figure 3
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Is using MLP implementations of domain-specific functions limiting in some ways? I.e., are there functions which, in experiments, are more difficult to train than others?
* Are the implementations of the FOL primitives hardcoded (e.g. as "min" for "and")?
* How are the domain-specific functions trained? There are few details on the training process, but since the pipeline is end-to-end differentiable, I am assuming they are trained using LLM-generated programs on some training data.
* How is the arity determined for concepts? Is it just "proposed" by the LLM during training?
* I was confused about the presentation of Table 4 and 5 -- what is a data efficiency setting? Is this just the amount of training data used (presumably to train the domain-specific functions)?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: There wasn't much discussion of limitations. I would have liked to see discussion wrt. analysis of the proposed approach, and the potential limitation of completely separating the language and perception aspects of these tasks by a single FOL program.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q: Language and perception.**
A: Thank you for bringing up these examples! There are indeed nuances that require interactions between language and vision. Similar to what you suggested, there are two candidate approaches: 1) interpreting language semantics in a context-dependent manner, by considering, e.g., the comparison classes for certain adjectives, and 2) relying on grounding modules for learning context-dependent grounding. In this paper, we choose 2) because our main focus is on integrating a unified reasoning framework with domain-specific concept learning. In general, we think option 2) has the advantage that the language interpretation part can be trained solely on text data, which is available at a larger scale than vision-language data.
Reference frames in spatial relations are handled in our work explicitly — they are specified in instructions in our 3D dataset. For example, given “Facing the cabinets, pick the object to its left”, Neuro-FOL generates `view(λx: cabinets(x)) and iota(λx: object(x) and left(x, iota(λy: cabinets(y))))`
and the grounding of concepts will explicitly model the viewpoint of the agent. The view function is learned as a generic high-arity function, instead of as a specific way to transform the point cloud. You are correct that the semantics of “left side” is object-specific. Neuro-FOL will learn a “left_side” concept that captures a canonical left side of the object from the data distribution, since the relation grounding module takes object features as input too.
Context (e.g., comparison classes) in adjective grounding is another important aspect. In the color example, Neuro-FOL can partially handle this by modeling the "degree" to which each entity satisfies the concept. For example, to find *the* blue object, the FOL execution assumes only one single object will be selected via "softmax", which partially handles the ambiguity. Concretely, suppose we have three objects: a pale blue, a pale red, and a vibrant blue object. To find *the* blue object, execution would choose the "bluest" one, i.e., the vibrant blue object when compared against both others. This also works, to a certain extent, for compositional concepts such as `blue(x) ^ sphere(x)`. In other cases, since our object encodings contain contextual information, we rely on learning from datasets to resolve such grounding.
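A minimal sketch of this softmax-based selection (the "blueness" logits below are made up for illustration; the paper's actual scoring networks and object features are not shown):

```python
import math

def softmax(scores):
    """Turn per-object concept logits into a distribution over
    which object a definite description refers to."""
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for: pale blue, pale red, vibrant blue objects.
blue_scores = [1.0, -2.0, 3.0]
probs = softmax(blue_scores)

# "The blue object" resolves to the highest-probability candidate:
# the vibrant blue object wins even though the pale blue one is
# also somewhat blue.
selected = max(range(len(probs)), key=probs.__getitem__)  # -> 2
```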
In sum, Neuro-FOL relies on context-dependent grounding modules to handle these cases. Though we do not build in pragmatic reasoning frameworks such as RSA [1], it is a direction we are excited about. We are very interested in tasks with ambiguous language and extending Neuro-FOL to be truly probabilistic, modeling speaker intents, and leveraging feedback from perception execution. Thank you again for the great feedback!
[1] Goodman, Noah D., et al. Pragmatic language interpretation as probabilistic inference. Trends in cognitive sciences (2016).
**Q: Analysis of Neuro-FOL.**
A: Indeed, Neuro-FOL’s modularity enables error attribution. For more difficult tasks (3D, human motion, robotics), the FOL programs are typically correct, but the domain-specific functions are more difficult to learn. For example, in ReferIt3D, there are 607 categories, with many classes visually similar (e.g., cabinet and dresser). We see that Neuro-FOL classification accuracy for concept grounding is not very high. Hence, we can potentially rely on more classification labels, instead of only language instructions, to improve performance on this task.
For syntax errors in LLM-generated programs, we can easily detect them and resample new programs. However, there is currently no way we can recover from semantic errors in the programs. We currently use GPT-3.5 in our experiments, but anticipate that GPT-4 would yield stronger performance; however, we were not able to leverage GPT-4 due to cost. Additionally, a promising future direction is to finetune the LLM from execution feedback.
**Q: MLP implementations.**
A: Thank you for bringing up this point! We do assume that MLPs can realize all domain-specific functions, as they take learned features as inputs, which are also jointly trained. This can occasionally be limiting if some functions are significantly more difficult to learn. In HumanMotionQA, for example, “action” concepts yield accuracy 0.637, while “body part” concepts only yield 0.495. Actions involving motion cues in multiple frames are easier to learn.
We can definitely extend such MLP implementation to use more complex NNs, which likely require more training data. Additionally, one can initialize different networks for different concepts if they have prior knowledge on the task. We can also directly integrate pretrained visual models as domain-specific grounding modules.
**Q: Function implementations.**
A: Yes, Neuro-FOL requires a small but general set of functions for execution, listed in Table 1. These functions are either general logic and numeric operations (e.g., counting, exists), or functions that handle inputs and outputs (e.g., return a text description, execute an action).
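The rebuttal does not spell out the soft semantics of these primitives; a common choice in differentiable neuro-symbolic executors is a fuzzy-logic relaxation, such as the `min`-for-`and` the reviewer guesses at. The sketch below illustrates that assumption and is not the paper's confirmed implementation:

```python
def f_and(p, q):
    """Soft conjunction over concept probabilities (Godel t-norm)."""
    return min(p, q)

def f_not(p):
    """Soft negation."""
    return 1.0 - p

def f_exists(ps):
    """Soft existential quantifier over a set of objects."""
    return max(ps)

def f_count(ps):
    """Expected count: sum of per-object probabilities."""
    return sum(ps)

# Per-object probabilities for blue(x) and sphere(x), three objects.
blue   = [0.9, 0.1, 0.8]
sphere = [0.2, 0.9, 0.7]
both = [f_and(b, s) for b, s in zip(blue, sphere)]  # blue(x) ^ sphere(x)
# exists(lambda x: blue(x) ^ sphere(x)) -> 0.7
# count over the same set -> roughly 1.0 (one object is likely both)
```

Every operation here is differentiable (or subdifferentiable) in the input probabilities, which is what allows end-to-end backpropagation through program execution.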
**Q: Training process.**
A: Yes, the domain-specific functions are trained through backpropagation from the downstream loss functions, as our executor is fully differentiable.
**Q: Arity for concepts.**
A: Yes, both concept names and arities are proposed by the LLM. For example, given `above(x, y)`, we will treat `above` as a binary relation.
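As an illustration of how a concept's arity could be read off an LLM-proposed term, here is a hypothetical helper (not the paper's actual parser):

```python
import re

def concept_arity(term):
    """Parse a FOL-style term such as 'above(x, y)' into its
    concept name and arity. Illustrative only."""
    m = re.fullmatch(r"(\w+)\(([^)]*)\)", term.strip())
    if m is None:
        raise ValueError(f"not a concept term: {term!r}")
    name = m.group(1)
    args = [a.strip() for a in m.group(2).split(",") if a.strip()]
    return name, len(args)

print(concept_arity("above(x, y)"))  # -> ('above', 2)
```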
**Q: Data efficiency setting.**
A: Yes, the percentages in Tables 4 and 5 indicate the fraction of the full training set used. For example, in Table 4, we train on 0.5% (329 examples), 1.5% (987 examples), etc., and show improvements over end-to-end methods.
**Q. Figure edits.**
A: Thank you for the suggestions! We will update the paper accordingly.
**Q: Limitations.**
A: We agree that an extended discussion section of limitations and future work will be helpful, and will add it to the main text incorporating all comments during review.
---
Rebuttal Comment 1.1:
Title: Happy to answer any further questions
Comment: Dear Reviewer xPiC,
Thank you for reviewing our submission. We have posted our response per your suggestions and questions. We are happy to discuss with you and answer any further questions. As the deadline for discussion is approaching, we very much look forward to your feedback.
Thank you,
Authors
---
Rebuttal Comment 1.2:
Comment: Thank you for your rebuttal and apologies for not responding earlier. It has answered my questions and I would still like to see this paper accepted.
---
Reply to Comment 1.2.1:
Title: Thank you
Comment: Dear Reviewer xPiC, thank you again for your helpful comments and for reviewing our response! -Authors | Summary: The manuscript proposes a framework that leverages an LLM to map queries to executable first-order-logic programs, with domain-specific grounding functions. The manuscript includes experiments on multiple tasks and domains.
Strengths: The paper is mostly well-written.
The paper considers an interesting topic, under compelling methodology (i.e., neuro-symbolism).
The presentations of visualisations and results are mostly clear.
Weaknesses: Section 1 (Introduction) / Figure 1 — One has to be particularly careful in applications that involve some (other) embodiment, as concepts like “left” could adopt an alternative reference frame besides just the egocentric one. How does the approach incorporate this reasoning?
Section 1 (Introduction; L49-51) — Whereas no domain-specific language is required at the reasoning level, there still needs to be lexical alignment with the concepts supported by the domain-specific grounding modules. Furthermore, the approach does still need predefined built-in functions (Table 1) and domain-specific concept names (L158).
Section 3.2 (L175-177) — Let’s be exceedingly explicit here. The manuscript states that the function implementations do not require manual definition in code. How are they generated? What are all the core FOL programs? What are their mappings from concepts initialized by the LLM?
Table 4 / Table 5 — The data percentages seem to be chosen arbitrarily. How were the percentages decided on? What happened for, e.g., 25% and 50%?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A — see above
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The manuscript does not include any sections on Limitations or Societal Impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q: Alternative reference frame for “left”.**
A: We thank you for bringing up this point! In domains where we need reference frames to disambiguate concepts like “left” (e.g., in 3D), the grounding of such concepts will explicitly consider the viewpoint of the agent — usually specified in the language query. For example, in the SR3D dataset, given the query “Facing the cabinets, pick the object to its left”, Neuro-FOL will generate the following FOL formula:
```view(Object, lambda x: cabinets(x)) and iota(Object, lambda x: object(x) and left(x, iota(Object, lambda y: cabinets(y))))```.
The view function is learned as a generic high-arity function, without requiring any specific implementation to change the actual viewpoint of the pointcloud. In domains where no viewpoint is specified in language, such as the 2D domain, we assume an egocentric reference frame. We will clarify this in the main text.
**Q: Clarifying concepts supported by grounding modules.**
A: We clarify that Neuro-FOL does not depend on a predefined set of lexical concepts and this is an important advantage of our system compared to earlier work on neuro-symbolic learning. To achieve this, we rely on LLMs to *automatically* extract concept names from natural language instructions in the dataset; Neuro-FOL then initializes domain-specific grounding modules (in our work, MLPs) accordingly to learn the grounding of these concepts from training data. In other words, the concepts and grounding modules are *not* predefined and can support execution of any given query. We note that *domain-specific concept names* means that these concepts are grounded to particular domains (e.g., 3D point clouds) and not shared across different datasets, but they are not manually defined.
Neuro-FOL does require a minimal but general set of predefined FOL functions. Importantly for our goal of a general neuro-symbolic framework without any predefined domain-specific functions, Neuro-FOL only requires the FOL functions in Table 1 to reason across 2D, 3D, temporal, and robotic manipulation domains. These functions are either general logic and numeric operations (such as counting, forall, exists), or functions that handle inputs and outputs (e.g., return a text description of object categories, or execute an action output by other modules).
For a new task, prior neuro-symbolic approaches require designing new domain-specific languages to support necessary operations and writing differentiable implementations for each function. In contrast, Neuro-FOL enables easy integration, while retaining all the benefits of neuro-symbolic learning. For a new task, we can simply define a neuro-symbolic system with the following code:
```
domain = make_domain(ALL_LANGUAGE)
parser = GeneralizedFOLPythonParser(domain)
executor = GeneralizedFOLExecutor(domain, parser)
scene_graph = SceneGraph{2D/3D/temporal}(THIS_INPUT_SCENE)
with executor.with_grounding(scene_graph):
    parsing = parser.parse_expression(THIS_INPUT_TEXT)
    execution = executor.execute(parsing)
    loss = loss(execution, gt)
```
By passing in all language input, a scene (2D, 3D, temporal, etc.), and a text query, Neuro-FOL will automatically define learnable domain-specific grounding modules, generate the FOL program, and execute domain-independent functions differentiably, such that you can simply apply any downstream loss as you would with an end-to-end method. We believe this is an exciting improvement over prior works that will increase the accessibility and flexibility of neuro-symbolic learning. We will clarify this in the main text!
**Q: Clarifying function implementations.**
A: Here, what we mean is, for all concepts extracted by LLMs, we use a generic mechanism to define the corresponding grounding module. For example, if a concept is unary (e.g., ```red(x)```), the grounding module will be an MLP that maps object representations to a scalar in [0, 1]. For the concept ```beside```, the LLM specifies that the concept takes in features of two entities, and hence Neuro-FOL creates an MLP that operates on the pairwise (binary) features.
More specifically, given the LLM-interpreted program ```exists(Object, lambda x: dresser(x) and beside(x, iota(Object, lambda y: cabinet(y)))```, ```exists``` and ```iota``` are domain-independent FOL functions, which take as arguments outputs from the domain-specific functions ```dresser```, ```cabinet```, and ```beside```. The FOL functions have built-in implementations and are listed in full in Table 1, with implementations in Appendix A.1. The domain-specific concept grounding functions do not require manual definitions in code, and their implementations are generated using the generic mechanism described above. We will clarify in the main text.
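To make the generic mechanism concrete, here is a hypothetical numpy sketch (the layer sizes, sigmoid output, and names are our assumptions, not the paper's code) of initializing a grounding MLP directly from a concept's arity:

```python
import numpy as np

# Hypothetical illustration of the arity-driven mechanism described above:
# given a concept name and arity proposed by the LLM, build an MLP whose
# input size depends on the arity (sizes and names here are assumptions).

rng = np.random.default_rng(0)
FEAT_DIM = 16

def make_grounding_module(arity, feat_dim=FEAT_DIM, hidden=32):
    # Unary concepts see one object's features; binary concepts see the
    # concatenated features of an (x, y) pair, and so on.
    in_dim = arity * feat_dim
    w1 = rng.standard_normal((in_dim, hidden)) * 0.1
    w2 = rng.standard_normal((hidden, 1)) * 0.1
    def forward(*object_feats):
        x = np.concatenate(object_feats)
        h = np.tanh(x @ w1)
        return 1.0 / (1.0 + np.exp(-(h @ w2)[0]))  # soft score in (0, 1)
    return forward

red = make_grounding_module(arity=1)     # red(x)
beside = make_grounding_module(arity=2)  # beside(x, y)

obj_a, obj_b = rng.standard_normal(FEAT_DIM), rng.standard_normal(FEAT_DIM)
print(0.0 < red(obj_a) < 1.0)            # True
print(0.0 < beside(obj_a, obj_b) < 1.0)  # True
```

In a real system the weights would of course be trainable parameters updated by backpropagation from the downstream loss; the sketch only shows how arity determines the module's input shape.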
**Q: Data efficiency percentages.**
A: For Tables 4 and 5, we followed the data-percentage settings reported in prior state-of-the-art neuro-symbolic methods — NS-CL and NS3D. In Table 4, with 10% of train data, only small improvements over end-to-end methods were seen, and hence additional percentages were not considered. For Table 5, we similarly followed prior work; following your suggestion, we have additionally added experiment results for 25% and 50%. As expected, our work shows larger improvements over MAC in more data-efficient settings.
| | Pre-def. | 10% | 25% | 50% | 100% |
| ----------- | ---------- | ----- | ----- | ----- | ----- |
| NSCL | Yes | 0.989 | 0.992 | 0.994 | 0.992 |
| TbD-Net | Yes | 0.541 | 0.560 | 0.912 | 0.991 |
| MAC | No | 0.673 | 0.940 | 0.974 | 0.989 |
| Neuro-FOL | No | 0.941 | 0.991 | 0.992 | 0.996 |
**Q: Limitations and societal impact.**
A: We include discussion in Section 4.2, and broader impacts in Appendix A.3. We agree that an extended discussion section tackling both limitations and broader impact will be helpful! We will add an additional section in the main text taking into account all comments.
---
Rebuttal Comment 1.1:
Title: Official comment by Reviewer 2ce2
Comment: I appreciate the authors' detailed responses to all reviews. I have no further comments, and I am satisfied with the answers to my questions; I will increase my score by one point.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you again for your helpful comments! We are glad to hear that the concerns have been addressed. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their constructive feedback! We have added additional data efficiency experiments on CLEVR, baseline results from VisProg & ViperGPT for comparison to Neuro-FOL, statistics and analyses of Neuro-FOL performance on the HumanMotionQA and ReferIt3D tasks, and code examples to exemplify the flexibility of Neuro-FOL usage.
Many reviewers suggested extending our discussion section in the paper, with which we fully agree. We will add a lengthened discussion section in the main text, with focus on limitations, future work, and broader impact, integrating all comments from your reviews.
Additionally, we’d like to highlight Neuro-FOL’s contribution over prior neuro-symbolic approaches. The existing methods require predefined “oracle” domain-specific functions for a given domain and task. For a new task, this means designing new domain-specific languages to support necessary operations and writing differentiable implementations for each function. This requires both domain-specific knowledge as well as neuro-symbolic background. In contrast, Neuro-FOL enables easy integration without either of the above requirements, while retaining all the benefits of neuro-symbolic learning. For a new task, we can simply define a neuro-symbolic system with the following code:
```
domain = make_domain(ALL_LANGUAGE)
parser = GeneralizedFOLPythonParser(domain)
executor = GeneralizedFOLExecutor(domain, parser)
scene_graph = SceneGraph{2D/3D/temporal}(THIS_INPUT_SCENE)
with executor.with_grounding(scene_graph):
    parsing = parser.parse_expression(THIS_INPUT_TEXT)
    execution = executor.execute(parsing)
    loss = loss(execution, gt)
```
By passing in all language input, a scene (2D, 3D, temporal, etc.), and a text query, Neuro-FOL will automatically define learnable domain-specific grounding modules, generate the FOL program, and execute domain-independent functions differentiably, such that you can simply apply any downstream loss as you would with an end-to-end method. We believe this is an exciting improvement over prior works that will increase the accessibility and flexibility of neuro-symbolic learning.
Based on reviewer suggestions, we also ran experiments on VisProg in addition to ViperGPT, in order to better analyze differences between LLM + API baselines and Neuro-FOL. Our method significantly outperforms baselines that leverage programs from LLMs, as Neuro-FOL has fewer predefined functions and more learned programs.
| | CLEVR-Ref | CLEVR-Puzzles | CLEVR-RPM |
| ----------- | ------ | ------ | -----|
| Neuro-FOL | **0.94** | **0.75** | **0.87** |
| Flamingo 4-shot | N/A | 0.54 | 0.52 |
| Flamingo 8-shot | N/A | 0.57 | 0.52 |
| VisProg | N/A | 0.27 | 0.51 |
| ViperGPT | N/A | 0.34 | 0.04 |
We highlight some examples where Neuro-FOL improves upon these methods. For example, ViperGPT does not contain predefined programs for “left” and “right”, hence, when running inference, the LLM produces Python code to compare the pixel coordinates in order to find such relations, which does not take into account depth of the scene, sizes of the objects, etc. By contrast, Neuro-FOL is able to learn programs for “left” and “right” respectively for more faithful execution. As an additional example, VisProg often passes in full phrases to its predefined methods, such as ```LOC(image=IMAGE, object='small yellow metal cylinder')```, instead of composing modular concepts for "small", "yellow", "metal", and "cylinder" as Neuro-FOL does, which allows for better error attribution. In addition, for more complex domains such as 3D and temporal human motion sequences, it is unclear what predefined models should be used for LLM + API methods.
We thank you for all the helpful comments, and are happy to answer any follow-up questions! | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Belief Projection-Based Reinforcement Learning for Environments with Delayed Feedback | Accept (poster) | Summary: This work proposes a solution to the delayed feedback problem, in this case a fixed timestep delay. There are approaches to solving this problem in the literature, often associated with cumulative error or un-traceability given longer delays. One common approach — using augmented states by concatenating observed states & actions taken since visiting the delayed state — grows the augmented state space exponentially as the delay increases. The authors’ algorithm (Belief Projection Based Q-Learning, BPQL) is formally shown to equally represent augmented state-based (Q-)Values. Further, the introduced belief-projection of those values is shown to be decomposable over the original states + residuals, therefore working on the original state-dimensionality rather than the exponential one of augmented state learning. The values are shown to be linearly approximable, an insight which the authors then use to reformulate the discrete version into a continuous control version of a Belief-based Actor-Critic Algorithm, which outperforms naive/augmented(-model) approaches of the soft actor-critic in a delayed version of the MuJoCo benchmark.
Strengths: - The approach is formulated on a quite technical level and appears to be a novel application of belief-based ML to the delayed feedback problem. There is related work in different directions cited for solving delayed environments (augmented approaches, model-based predictions, expectation minimization Q-Learning), although I am not very familiar with the augmented techniques, which are directly compared against here. The reasoning towards the belief-based Q-Learning Actor-Critic seems to differentiate enough from Agarwal & Aggarwal 2021, which they cite as closest related work.
- The mathematical reasoning seems reasonable as far as I can follow. The underlying constant delayed MDP and the assumptions are stated clearly. Furthermore, MuJoCo represents an appropriately difficult evaluation domain and the evaluation is sound, although there is no compute resource or training-time mentioned (as far as I can tell).
Weaknesses: - The paper seems to be missing a discussion on the related concept of belief-state architectures, which may be considered related to the model-based approaches mentioned here but with more relevance to this chosen approach.
- One point of concern I have with the claimed novelty compared to Agarwal & Aggarwal 2021 is the difference of developing a continuous control algorithm, rather than a discrete model. I find it confusing that much of the definition of the BPQL / Actor-Critic model revolves around Q-Values and Q-Learning (the Q-Function, as it is called here), which are inherently discrete concepts. The authors mention a generalization to continuous control problems, but the transfer l.228-224 reads more like an afterthought to position the algorithm in a niche without direct comparison benchmarks. If most of the work covers the Q-Learning approach it would be informative to prove performance in this approach compared to e.g. Agarwal & Aggarwal 2021 or the DDQN model-based approach of van Hasselt et al. 2016 before moving on to the novelty of continuous problems.
- Furthermore, the paper is lacking a motivation on why belief-based projection is an appropriate technique and was chosen here (aside from not dealing with exponentially growing state-spaces, which is repeated ... often enough). The reader is presented with a very final formulation of the belief matrix and belief projection operator initially, with the following pages re-contextualizing the known MDP / Bellman Equation into this framework. Without a background in control theory or kernel operators (and given the lack of references) I assume this part to be proposed by the authors or established theory, but without any descriptive intuition behind this formalization, much of Chapter 3 reads like a proof that would usually be found in the Appendix. For a clearly readable publication I find this submission is written without the reasoning to convey its key ideas behind the formalities to a broader audience.
- Given that the evaluation is based on the AC method, I would have preferred to have the AC technicalities in the paper itself rather than the Appendix. Some of the implementation details I found helpful to reason about the evaluation results were found in the Appendix and only the code itself. However, given the technical nature of the paper I would understand why they were cut.
- I feel conflicted about claiming significant performance gains against the mostly self-constructed SAC baselines with ‘only’ 1M training steps on a complex domain like MuJoCo. The evaluation does seem cleanly run and favorable for the BPQL approach (at least in terms of sample efficiency), however, I would argue that only 1M steps is a debatable length for some of these problems to be solvable by the intentionally slow SAC algorithm (regardless of hidden-dim size, not even accounting for delay complexity). Out of the three baselines, the naive SAC is akin to the random baseline and the adjusted model-DDQN baseline was not originally designed for a SAC and is therefore hard to judge in terms of comparability. As mentioned above, it seems as if this work is introducing a Q-Value approach and evaluating on continuous control problems that are a short transfer away, but is not comparing to established discrete solutions to this problem. If better benchmarks for this specific environment niche (continuous control delayed feedback) are not available, it would be nice to see the theoretical maximum reward per domain or other relational metrics that would indicate how the BPQL is performing in general, not just compared to the benchmarks. Apart from the baseline evaluation, the belief-projection application seems like an improvement in terms of computability and therefore, sample efficient training for delayed environments.
- While the BPQL approach may be employed in any fixed-step delay environment, with the initial setting of ‘uncertain latency’ that is motivated in the introduction, the authors are correct in listing this fixed-step assumption as a notable limitation. Especially in the continuous domain case (which is the differing focus of this work), having one (small) continuous, fixed delay signal is quite the niche. Nevertheless, given the defined setting the proposed algorithm shows promise in its intended purpose.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Aside from concerns mentioned above, the following questions arise:
- Why is the belief-projection chosen here and how would you motivate this decision?
- Why was the SAC algorithm chosen as the baseline, how was it parameterized and how would common baselines like A2C or PPO compare in this setting?
- What is the runtime overhead of computing the belief-projection compared to the (technically simpler) augmented state techniques?
- Would it be possible to show e.g. memory-complexity of the baselines and of BPQL as a metric in the specific case of MuJoCo domains here? L.225ff mentions that the algorithm also stores augmented states (original states and action history) in an extra temporary buffer, could you please contextualize this to augmented state learning? (As mentioned above, this would also be information to include in the compute-resource / runtime statement.)
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Assumptions and limitations are stated in the context of the formalization. No potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer ozYa,
We want to sincerely thank you for the valuable effort you put into providing constructive feedback. We have carefully considered each of your comments, and provide additional details as follows:
**Novelty**
In addition to its applicability in continuous domains, BPQL has several distinct aspects compared to Agarwal & Aggarwal's method (EMQL). In BPQL, the Q-function is evaluated using a reduced state size, while its policy uses an augmented state size to choose actions. Thus, the state sizes for the policy and Q-function are different, a **unique** feature not found in pure Q-Learning approaches like EMQL. Additionally, EMQL relies on a count-based model, making it challenging to apply in cases with large state and action dimensions, even in discrete environments. However, BPQL overcomes this issue by **not relying on an explicit model**, making it a truly "model-free" approach, in contrast to EMQL's "model-based" approach.
**Discrete environments**
>"it would be informative to proof performance in this approach compared to e.g. Agarwal&Aggarwal2021 or the Model-based approach ... before evading into the novelty of continuous problems."
In the field of _robot learning_, latency is well-known for degrading performance. To address this issue, we developed BPQL, specifically designed for continuous robotic domains. Nevertheless, we agree that comparing our algorithm's performance with other algorithms in discrete domains would effectively demonstrate the generality of our approach. Therefore, we conducted an additional experiment with EMQL on discrete environments in OpenAI Gym. For this experiment, we implemented BPQL using the discrete version of SAC. The results are as follows:
|CartPole-v0|d=2 / Epi=200|d=2 / Epi=1000|d=4 / Epi=200|d=4 / Epi=1000|
|-|-|-|-|-|
|EMQL|<= 80|<=150 |<=70|<=125|
|BPQL|169 ±13|200±0|109±8|200±0|
We also conducted another experiment to compare the performance with Delayed-Q (Derman et al.,2021) on discrete environments.
|Cartpole-v1|Delayed-Q|BPQL|
|-|-|-|
|d=15|414±14|**464±60**|
|d=25|324±7|**367±94**|
|**Acrobot-v1**|**Delayed-Q**|**BPQL**|
|d=15|**-211±53**|-246.8±206|
|d=25|-351±57|**-335.3±202**|
(_± denotes std._)
BPQL's strong performance demonstrates its potential not only in continuous but also in discrete settings.
**Longer Interaction**
>“However, I would argue that only 1M steps is a debatable length ... ”
To address this, we extended the evaluation to 2M and 3M steps. Model-based SAC didn't improve beyond 1M steps, but Augmented SAC showed notable improvement with more interactions due to its theoretical error-free TD learning. However, BPQL still outperformed it.
|d=9|HalfCheetah-v3||Hopper-v3||Walker2d-v3||
|-|-|-|-|-|-|-|
|steps|2M|3M|2M|3M|2M|3M|
|Augmented SAC|1362±40|1463±51|2476±783|2821±627|2970±897|3223±841|
|BPQL|5980±321|6214±108|2983±834|3120±561|4368±701|5130±472|
**Relational metric**
>“It would be nice to see the theoretical maximum reward per domain or other relational metrics ...”
We agree that adding a relational metric in the comparison table would improve the clarity of our experiment and provide a better understanding of the impact of delays on the algorithms. Hence, we examined the performance of SAC in a "delay-free" environment as a comparison metric. The performance of delay-free SAC is as follows:
|(1M steps)|HalfCheetah-v3|Walker2d-v3|Hopper-v3|Swimmer-v3|InvertedPendulum-v2| Reacher-v2|
|-|-|-|-|-|-|-|
|SAC|11202±423|4733±728|3145±429|91.2±12|1000±0.0|-3.8±0.6|
**Questions**
>**Q 1.** Why is the belief-projection chosen ... motivate this decision?
We chose belief projection (BP) to find the best latent space representing the augmented state. In the ML literature, dimensionality reduction methods like PCA and VAE have shown significant benefits for training. To leverage these advantages in RL, we incorporated a projection-based state reduction method into our algorithm. Additionally, as we discussed in Lines 134-137, we assumed that similar successive transition probabilities imply similar meanings for the augmented states. Thus, we selected BP as the state reduction method.
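As a toy numerical picture of the projection idea (our own simplified notation with a row-stochastic belief matrix, not the paper's exact operator), projecting augmented-state Q-values down to the original state space is just a belief-weighted average:

```python
import numpy as np

# Toy illustration (assumed notation, not the paper's operator): Q-values
# over augmented states x are projected onto original states s through a
# belief matrix B with B[s, x] = b(x | s); each row is a distribution.

n_orig, n_aug, n_act = 2, 4, 3
rng = np.random.default_rng(1)

B = rng.random((n_orig, n_aug))
B /= B.sum(axis=1, keepdims=True)   # normalize rows into belief distributions
Q_aug = rng.random((n_aug, n_act))  # Q-values over the augmented state space

Q_beta = B @ Q_aug                  # beta-Q lives in the original state space
print(Q_beta.shape)                 # (2, 3)
```

Each projected value is a convex combination of augmented-state values, which is why the representation can live in the much smaller original state dimensionality.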
>**Q 2.** Why was the SAC algorithm ... compare in this setting?
SAC is one of the SOTA algorithms used in continuous tasks, usually outperforming PPO in MuJoCo control tasks (Haarnoja et al. 2018). Therefore, we integrated SAC into BPQL. The policy is parameterized to represent a squashed Gaussian distribution.
>**Q 3.** What is the runtime overhead ... ?
The dominant part of both algorithms in the runtime is the learning process (GPU load + neural network training). We evaluated the Monte Carlo estimation for the learning process:
||Augmented SAC|BPQL|
|-|-|-|
|Learning time (1 iter.)|8.98 ms|9.32 ms|
|||
with the following resources:
* CPU: i7 9700K
* GPU: GTX 1080 TI
* RAM: 48GB
The experiments were conducted in HalfCheetah-v3 with a delay of 9. BPQL uses a Q-function network with fewer parameters than Augmented SAC, yet it incurs a ~3% runtime overhead. This is likely due to the increased loading of transition tuples to the GPU from BPQL's temporary buffer.
>**Q4**. Would it be possible to show e.g. memory-complexity ...
Let M be the maximum size of the replay memory, A be the dimension of the action space, d be the delayed timesteps, and S be the dimension of the state space. Then the memory complexity for Normal SAC is $O(M(A+2S+2))$, and for Augmented SAC, it is $O(M(A+2(\underbrace{S+dA}_{\text{augmented state}})+2)+Ad)=O(M(A(1+2d)+2(1+S)))$. The term $Ad$ represents the space required for storing the action history to create augmented states. BPQL adds two original-sized states to the tuple used in the replay memory of Augmented SAC, resulting in a complexity of $O(M \cdot (A+2S+2(S+dA)+2)+2(S+Ad+1))=O(M(A(1+2d)+2(1+2S)))$. The term $2(S+Ad+1)$ represents the space required for constructing the temporary buffers. Both terms are omitted in the Big O notation as $M \gg Ad$ and $M \gg 2(S+Ad+1)$.
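As a quick sanity check of the bookkeeping above (element counts only; the exact tuple layouts are our reading of the rebuttal text, not the authors' code):

```python
# Sanity check of the buffer-size bookkeeping above (element counts only;
# the exact tuple layouts are our reading of the text, not the authors' code).

def augmented_sac_mem(M, S, A, d):
    # per transition: action + two augmented states (S + d*A each) + 2 scalars,
    # plus the running action history of length d
    return M * (A + 2 * (S + d * A) + 2) + A * d

def bpql_mem(M, S, A, d):
    # BPQL adds two original-sized states per transition, plus temporary
    # buffers of size 2 * (S + A*d + 1)
    return M * (A + 2 * S + 2 * (S + d * A) + 2) + 2 * (S + A * d + 1)

M, S, A, d = 1_000_000, 17, 6, 9  # HalfCheetah-like sizes (assumed)
overhead = bpql_mem(M, S, A, d) - augmented_sac_mem(M, S, A, d)
print(overhead)  # dominated by 2*M*S: two extra original-sized states
```

The difference works out to $2MS + 2S + Ad + 2$, i.e., the extra cost is essentially two original-sized states per stored transition, matching the Big-O argument above.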
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal.
Comment: Thank you very much for this extensive reply and for taking the time to run these additional experiments. The additional insights regarding the performance of BPQL in the discrete setting and the comparison to other works make a nice argument in favor of this work and will be taken into consideration. I hope these new results are included in the paper in some form. I also agree with the other reviewer and would very much like to see the code open-sourced for the final publication. I will adjust the review accordingly.
---
Reply to Comment 1.1.1:
Title: Thank you for the constructive suggestion!
Comment: We deeply appreciate the reviewer for providing the constructive suggestion and updating the score!
We agree that the experimental results of the discrete environment will further support the promise of the proposed algorithm. Therefore, we will incorporate these experimental results into our paper. Additionally, it's our pleasure to share our source code, and we look forward to our work contributing to the RL and robot learning communities. Thank you again for the valuable suggestion and for putting effort into reviewing our paper. | Summary: The authors propose a projection approach for more compactly representing the value function of a delayed system in reinforcement learning. A complete algorithm based on SAC with neural network approximations is presented and benchmarked on four Mujoco environments where it outperforms competing approaches.
Strengths: - The paper addresses an important problem when applying RL on real-world systems: latency. This is a common issue in robotics in particular. As such, a good solution would be of considerable value to the robot learning community.
- The paper is well-written and mostly easy to read
- The approach demonstrates impressive improvements on delayed systems in Mujoco benchmarks
Weaknesses: - The experiments are only on four rather simple Mujoco control benchmarks. It would have been nice to have something more realistic, but I think the results were clear enough to establish the promise of the approach.
- A few important details could be more clear (see questions below)
Side note: Code is included but not publicly released. As the RL community has struggled with reproducibility due to small discrepancies in implementations, this would have been nice.
Minor:
- Not sure what you mean by "agnostic random delayed environment" in the conclusions? If you just mean randomly delayed environments, that sentence can be simplified a lot.
------------
After rebuttal:
The authors addressed all of my concerns and therefore I raise my score to a 7. This paper could be very useful to better solve delayed control problems, which I can attest to being common in robotics. That said I agree with Reviewer 3qfv that the method sections were needlessly difficult to read. However, the authors promised to add some intuitions earlier in the final manuscript, which I think will be good enough. I thank the authors for their work and encourage them to open source their code so there is less room for misunderstandings by future work on this important problem.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) The text is a bit vague on what added assumptions / complexity there are for the beta Q-function representation. Initially it seemed like the state transitions/beta matrix would have to be estimated separately in (10) where a linear projection was used. Later a "practical approximation" is introduced where it seems the beta Q-function can be updated iteratively via sampling as normal in Q-learning? Does this preserve optimality or is it an approximation (a further approximation on top of regular NN SAC)? If not optimal, can you clarify what added assumptions there actually are?
2) Why is it performing better even on the noisy environment without delay? This is not obvious from the claimed contributions.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: To be clarified, the method seems very general.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer iPGm,
Thank you very much for your positive review and for acknowledging our contributions. We have carefully read your comments, and we would like to provide additional details as follows:
**Additional experiments on a realistic control task**
>“The experiments are only on four rather simple Mujoco control benchmarks. It would have been nice to have something more realistic, but I think the results were clear enough to establish the promise of the approach”
In addition to our experimental results, we performed an industry-oriented simulation task, _Pusher-v4_, featuring a collaborative robot, to further validate the practicality of our algorithm. "Pusher-v4" is an environment from OpenAI Gymnasium with a multi-jointed robot arm resembling a human arm. The goal is to move a target object to a specified goal position using the robot's end effector.
We conducted experiments in the Pusher-v4 environment where 200 ms and 500 ms sensing delay exist. The results are shown in the table below.
||delay=200 ms|delay=500 ms|
|-|-|-|
|Baseline (SAC w/o delay)|-21.0±0.6|-21.0±0.6|
|SAC|-88.5±10.9|-166.0±3.9|
|BPQL|-22.7±0.7|-24.8±1.5|
The results confirm the promise of BPQL, showing comparable performance to the delay-free environment (baseline SAC) despite the presence of delay.
**Algorithm details**
>“The text is a bit vague on what added assumptions ... Initially it seemed like the state transitions/beta matrix would have to be estimated separately in (10) where a linear projection was used.”
Yes. In the practical version, we do not estimate the belief matrix and transition probability separately to compute the beta-Q value. Instead, we incrementally estimate the beta-Q-values from the samples obtained through interactions with the environment. This sampling-based learning is a key characteristic of reinforcement learning, and in this setting the Q-values are proven to converge (Tsitsiklis and Van Roy, 1996).
>“Later a "practical approximation" is introduced where it seems the beta Q-function can be updated iteratively via sampling as normal in Q-learning? “
Yes. The beta Q-function is trained similarly to a normal Q-function, using temporally aligned transition tuples $(s_t, a_t, r_t, s_{t+1})$. However, this is **not the same situation**, because the action $a_t$ is obtained from the policy with the augmented input $(s_{t-d}, a_{t-d}, ... , a_{t-1})$, not $s_t$ (which has not yet been observed due to the delay). Consequently, unlike the normal Q-function that learns true Q-values, the beta Q-function approximates the true Q-value (linearly), and BPQL has a unique characteristic: the (beta) Q-function is evaluated using a reduced (original) state size, while its policy uses an augmented state to choose an action (Eq. 24, 25).
**Assumption behind the algorithm**
>“Does this preserve optimality or is it an approximation ... If not optimal, can you clarify what added assumptions there actually are?”
Unfortunately, neither the iterative method (Section 3) nor the practical version (Section 4) can guarantee optimality, even though the beta value converges. This is because the beta value is an approximation of the true value that ignores the residual (Eq. 11, 12). If an environment is extremely chaotic, the norm of this residual cannot be safely ignored, leading to a gap between the optimal policy and the learned policy.
In this paper, we **assume** that, in most real-world continuous tasks, the linearly approximated value can **sufficiently represent** the true value. Indeed, in various MuJoCo benchmark experiments, BPQL showed better performance than the baselines and even outperformed Augmented SAC, which theoretically learns the true "error-free" value.
**Optimality Analysis**
Additionally, using a property of the linear approximator, an interesting error bound can be derived as follows:
*Proposition.*
Let $\hat{V}^{*}_{proj.}$ be the unique fixed point calculated by repeatedly applying the combined operator
$\Pi_{\textbf{W}}\bar{\textit{T}}^{\bar{\pi}}$, and $V^{true}$ be the true value of the policy. Then,
$||V^{true}-\hat{V}^{\ast}_{proj.}||^2_W \leq \frac{1}{1-\gamma^{2}}||V^{true}-\Pi_W V^{true} ||^2_W$.
*Proof.*
$||V^{true}-\hat{V}^{\ast}_{proj.}||^2_W$
$=||V^{true}-\Pi_W V^{true}||^2_W$ $+||\Pi_W V^{true} -\hat{V}^{\ast}_{proj} ||^2_W$
$=||V^{true}-\Pi_W V^{true}||^2_W+||\Pi_W V^{true}-\Pi_{W}\bar{T}^{\bar{\pi}}\hat{V}^{*}_{proj.}||^2_W$
($\because \hat{V}^{\ast}_{proj.}$ is the fixed point of the combined operator.)
$\leq ||V^{true}-\Pi_W V^{true}||^2_W + \gamma^2||V^{true}-\hat{V}^{*}_{proj.}||^2_W$
(by $\gamma$-contraction property of the combined operator.)
$\blacksquare$
This implies that the projection $\hat{V}^{\ast}_{proj.}$ is, at least, not far from the true value $V^{true}$.
**Noisy Environment**
>“Why is it performing better even on the noisy environment without delay”?
In the noisy inverted pendulum environment, the number of delayed timesteps is six, not zero. We suspect this confusion may be due to the small font size in the figure; if so, we sincerely apologize, and the font size will be adjusted for better readability in the camera-ready version of the paper.
**Open-sourcing the code**
>“Code is included but not publicly released. As the RL community has struggled with reproducibility ... this would have been nice.”
Of course! We are delighted to share our code with the RL and robot learning community. Once our paper is published, the source code will be made accessible to everyone.
**Minor**
>“Not sure what you mean by "agnostic random delayed environment" in the conclusions? If you just mean randomly delayed environments, that sentence can be simplified a lot.”
Thank you for pointing this out. We agree that removing 'agnostic' from the sentence simplifies it significantly. We will make this modification in the final version of the paper.
---
Rebuttal Comment 1.1:
Title: Thanks for the additional details
Comment: I thank the authors for the additional details which resolved most issues.
I am happy to underscore that this is an important problem, delayed continuous domains are common in real-world control applications.
My only remaining concern, which I see was also raised by some of the other reviewers, is with the clarity of the final steps from the initial linear projection idea to the sampling-based implementation. You first introduce a closed-form linear projection but then end up with (24), which looks like regular SAC except for the augmented states in the expectation. However, the augmented states are not used in the actual Q-functions, only for the policy entropy term (which is related to exploration). The text also mentions "parameterized" beta Q-functions for the first time, which is confusing and hints that maybe there is something more going on in the Q_beta value updates than is obvious from (24).
---
Reply to Comment 1.1.1:
Title: Additional details for Equation (24).
Comment: We sincerely appreciate your enthusiastic engagement in the discussion! We hope that the derivation of Equation (24) becomes clearer through the following details.
**step 1. Regular Bellman backup**
First, Bellman backup for regular Q-learning can be expressed as the following equation:
$Q(\bar{s_t}, a_t)=R(\bar{s_t}, a_t)+\gamma E_{P(\bar{s_{t+1}}|\bar{s_t},a_t), \bar{\pi}(a_{t+1}|\bar{s_{t+1}})} [Q(\bar{s_{t+1}},a_{t+1})]. \qquad (1)$
**step 2. Bellman backup with linear approximation**
If we approximate this Q-function using a linear function approximator, Equation (1) can be transformed as follows:
$\beta^T\Phi(\bar{s_t},a_t) =R(\bar{s_t}, a_t)+\gamma E_{P(\bar{s_{t+1}}|\bar{s_t},a_t), \bar{\pi}(a_{t+1}|\bar{s_{t+1}})}[\beta^T\Phi(\bar{s_{t+1}},a_{t+1})], \qquad (2)$
where $\Phi$ is the feature vector and $\beta$ is the corresponding parameter vector. Our objective is now to find the parameters $\beta$ that best fit Equation (2) for the feature vector $\Phi$.
**step 3. Belief projection-based linear approximation**
In the case of belief projection, $\Phi$ equals $[P(s_1|\bar{s_t}), P(s_2|\bar{s_t}), ... , P(s_{|\chi|}|\bar{s_t})]^T$ and $\beta$ equals $[Q_{\beta}(s_1,a_t),Q_{\beta}(s_2,a_t), ... ,Q_{\beta}(s_{|\chi|},a_t)]^T.$ Therefore, Equation (2) can be rewritten as the following:
$E_{P(s_t|\bar{s_t})}[Q_{\beta}(s_t,a_t)]=R(\bar{s_t}, a_t)$+$\gamma E_{P(\bar{s_{t+1}}|\bar{s_t},a_t), \bar{\pi}(a_{t+1}|\bar{s_{t+1}})}$ $[E_{P(s_{t+1}|\bar{s_{t+1}})}[Q_{\beta}(s_{t+1},a_{t+1})]]. \qquad (3)$
To close the gap between the left and right sides of Equation (3), we minimize the following equation:
$E_{P(s_t,s_{t+1},a_{t+1},\bar{s_{t+1}}|\bar{s_t})}[Q_{\beta}(s_t,a_t)-\bar{R}(\bar{s_t},a_t)-\gamma Q_{\beta}(s_{t+1},a_{t+1})]^2. \qquad (4)$
($\because E_{P(s_t,s_{t+1},a_{t+1},\bar{s_{t+1}}|\bar{s_t})}[Q_{\beta}(s_t,a_t)]=E_{P(s_t|\bar{s_t})}[Q_{\beta}(s_t,a_t)]$)
This linearly-approximated Bellman backup (Equation 4) should hold for all states, so our final objective can be expressed as the following:
$E_{\bar{s_t} \sim \rho(\bar{s})} [E_{P(s_t,s_{t+1},a_{t+1},\bar{s_{t+1}}|\bar{s_t})}[Q_{\beta}(s_t,a_t)-\bar{R}(\bar{s_t},a_t)-\gamma Q_{\beta}(s_{t+1},a_{t+1})]]^2. \qquad (5)$
We utilize replay memory to compute Equation (5). The replay memory contains transition tuples: given $\bar{s_t}$, each tuple records the agent's action $a_t$, the subsequent states $\bar{s_{t+1}}$ and $s_{t+1}$, and the reward $R(\bar{s}_t,a_t)=r_t$.
Therefore, we approximate Equation (5) as follows using the replay memory $D$:
$E_{(\bar{s_t},a_t,s_t,r_t,\bar{s_{t+1}},s_{t+1},a_{t+1})\sim D}[Q_{\beta}(s_t,a_t)-\bar{R}(\bar{s_t},a_t)-\gamma Q_{\beta}(s_{t+1},a_{t+1})]^2. \qquad (6)$
(_Please note that the beta function;_ $\beta(s,a)=Q_{\beta}(s,a)$ _differs from the conventional Q-value in RL literature, yet the expectation of the beta Q-value is the conventional Q-value. However, given the similarity of the beta function update equation to the regular Q-learning formula, we have specifically chosen to name the beta function as_ $Q_{\beta}(s,a)$).
**step 4. Continuous domain**
Lastly, for the application of Equation (6) in continuous domains, we have parameterized the function $Q_{\beta}(s,a) \to Q_{\beta, \theta}(s,a)$ using a neural network's weights $\theta$. If we use soft Bellman backup instead of Bellman backup, the Equation (24) in the manuscript is established.
This aligns with our intuition: replacing the last observed state and the previous $d$ actions (forming an augmented state) with the state $d$ steps ahead (the result of the previous actions and the last observed state).
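For illustration, the sampling-based objective of Equation (6) can be sketched as a simple loss over replay tuples. This is a hedged sketch rather than the authors' implementation: `q_beta` and `policy` are stand-ins for the parameterized beta Q-function and the augmented-state policy, and the batch is assumed to hold temporally aligned tuples.

```python
def bpql_td_loss(q_beta, policy, batch, gamma=0.99):
    """Mean squared TD error of Equation (6) over a batch of transitions.

    The key asymmetry of BPQL: the critic q_beta is evaluated on the
    ORIGINAL-sized states (s_t, s_{t+1}), while the next action is drawn
    from the policy conditioned on the AUGMENTED next state (the last
    observed state plus the d most recent actions).

    q_beta : callable (s, a) -> float, stand-in for Q_beta
    policy : callable (aug_state) -> action, stand-in for the actor
    batch  : iterable of (s_t, a_t, r_t, s_{t+1}, aug_{t+1}) tuples
    """
    sq_errors = []
    for s, a, r, s_next, aug_next in batch:
        a_next = policy(aug_next)                    # a_{t+1} from augmented state
        target = r + gamma * q_beta(s_next, a_next)  # TD target uses reduced state
        sq_errors.append((q_beta(s, a) - target) ** 2)
    return sum(sq_errors) / len(sq_errors)
```

In practice this loss would be minimized over the critic's parameters by stochastic gradient descent, with the soft (entropy-regularized) target substituted as in Equation (24).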
Please note that Equation (6) looks similar to regular Q-learning, but there are distinct differences: in the delay setting, ignoring the delay can violate the Markov assumption for the one-step transition, $P(s_{t+1}|s_t,a_t) \neq P(s_{t+1}|s_{t},a_{t},a_{t-1}, ...)$. Also, the action $a_t$ is chosen from $\bar{\pi}(\cdot|\bar{s_t})$, not $\bar{\pi}(\cdot|s_t)$. | Summary: This paper aims to tackle the delayed feedback RL problem. In such a problem setting, usually there is some fixed/constant delay in the environment, so that the observed rewards are delayed by $d$ timesteps. Traditionally, augmented state spaces have been used to solve such an issue; however, for large delays this leads to an exponentially large dimension for learning the value function. Thus, the proposed approach (BPQL) leverages the deconstruction of the augmented value function via the belief matrix into a lower dimensional beta q value function, and shows that this approximates the augmented q function (derived from the soft bellman update). This outperforms baselines on a variety of control tasks.
Strengths: - I think there is good technical novelty and soundness in this paper
- The problem space is very interesting and relevant to real world applications
- BPQL drastically outperforms baselines
- This approach is general (as it applies to continuous settings)
Weaknesses: I think it would be interesting to explore this idea on different types of environment delay (for example delays coming from different sources like actuators, sensors, etc. It would also be good to see more analysis and ablations on different levels of delays (value of $d$).
Finally, it would be great if this approach were applied to more realistic robotics settings (even simulated versions).
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See weaknesses section
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: This has been sufficiently addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear HRKg,
We sincerely appreciate your positive review and your interest in the problem space our paper aims to address. Based on your suggestions, we have conducted additional tests and provide the details as follows:
**Different types of delay & Ablation**
>“I think it would be interesting to explore this idea on different types of environment delay (for example delays coming from different sources like actuators, sensors, etc. It would also be good to see more analysis and ablations on different levels of delays (value of d).”
In the field of robotics, delays occur in various scenarios such as remote robot control, communication over low-bandwidth links, and complex control algorithms running on on-board devices (sensors, actuators) with low computational capacity. Environments with these delays can be categorized into the following three types:
* Observation delay environment
* Action delay environment
* Combined (Observation + Action) delay environment
In the main text, BPQL was mainly evaluated in environments with observation delay only. However, BPQL is a general algorithm that can be applied not only to the observation-delay scenario but also to the action-delay scenario and even the combined-delay scenario. To validate the generality of BPQL, we conducted an additional experiment evaluating performance under various delay combinations; the results are listed in the table below.
||HalfCheetah-v3|Walker2d-v3|Hopper-v3|
|-|-|-|-|
|Obs. delay=5 / Act. delay=0|6371.4±181.9|4363.3±332.0|3144.1±528.2|
|Obs. delay=0 / Act. delay=5|6459.1±204.8|4008.2±663.8|2736.8±931.6|
|Obs. delay=3 / Act. delay=2|6332.4±210.4|4399.7±455.5|2802.8±670.8|
From the results, we confirmed that BPQL shows promise not only in single delay environments but also in combined delay environments. This generality of BPQL allows for broader application in addressing the significant latency issue in the field of robot learning.
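For concreteness, a constant observation-delay environment of the kind discussed above can be emulated by a thin wrapper around any environment. The sketch below is hypothetical (a simplified `reset`/`step` interface for illustration), not the benchmark code used in these experiments:

```python
from collections import deque

class ObservationDelayWrapper:
    """Minimal sketch of a constant observation-delay environment.

    Wraps any env exposing reset() -> obs and step(a) -> (obs, reward, done);
    the agent always receives the observation from d steps ago.
    """
    def __init__(self, env, d):
        self.env, self.d = env, d
        self.obs_buffer = deque(maxlen=d + 1)

    def reset(self):
        obs = self.env.reset()
        # pad the buffer so the first d steps return the initial observation
        self.obs_buffer.extend([obs] * (self.d + 1))
        return self.obs_buffer[0]

    def step(self, action):
        obs, reward, done = self.env.step(action)
        self.obs_buffer.append(obs)      # maxlen evicts the oldest entry
        return self.obs_buffer[0], reward, done  # obs from d steps ago
```

An action-delay wrapper can be built the same way by buffering actions before passing them to the underlying environment, which is why the combined scenario poses no extra difficulty for the algorithm.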
**Additional experiments on a realistic robotics setting**
>"it would be great if this approach applied to more realistic robotics settings (even simulated versions)."
To further demonstrate the practicality of BPQL, we conducted an additional experiment in a more industry-oriented simulation task called Pusher-v4 from OpenAI Gymnasium. Pusher-v4 features a multi-jointed collaborative robot arm resembling a human arm. The objective of the task is to move a target object to a specified goal position using the robot's end effector. In this experiment, we added sensing delays of 200 ms and 500 ms to the environment, and the results are presented in the table below.
||delay=200 ms|delay=500 ms|
|-|-|-|
|Baseline (SAC w/o delay)|-21.0±0.6|-21.0±0.6|
|SAC|-88.5±10.9|-166.0±3.9|
|BPQL|-22.7±0.7|-24.8±1.5|
The results show that, even with the presence of delay, BPQL demonstrates comparable performance to the delay-free environment (baseline SAC), reaffirming its potential. | Summary: This paper addresses the issue of delayed feedback in reinforcement learning,
where the observations and rewards received by the agent are those generated by
the environment multiple (`d`) time steps ago, most commonly caused by latency
in hardware. This problem is typically solved by augmenting the policy and
value functions with the `d` previous actions, recovering the Markov property
of the problem. This paper argues that this augmented 'state space' is
prohibitively large and proposes Belief-Projection-Based Q-learning (BPQL) as a
fix to the issue.
BPQL modifies SAC such that the value function (critic) that is being learned
only takes the observation (and not the `d` actions) as input (though the
policy (actor) still depends on the augmented space). The result is an
approximation of the original problem, where the critic learns the expected
value of the policy based on just the last observation, as opposed to the
observation and last `d` actions.
This approach is motivated by a substantial theoretical section that derives
and discusses the (approximated) loss function. The method is compared against
augmented SAC, a model-based approach, and regular SAC on MuJoCo for different
delays.
Strengths: This paper proposes a high-performance solution to the complex problem of
delayed feedback. It does so in a fairly simple (in the positive sense) way
that should be easily implementable and testable.
Another advantage is the theoretical support of what otherwise would be a
fairly ad-hoc solution. This is particularly useful because, in my opinion, the
intuition of BPQL is fairly nice: the critic can ignore the additional action
history input because this depends only on the policy it is evaluating: the
action sequence of the policy is relatively 'stable', or determined, so not
particularly variable or something that needs to be given to the critic in
order to evaluate the expected return.
======== post rebuttal ==========
The authors brought up reasonable arguments, context, and experiments to defend their work. I would personally still lean towards rejection because, in its current form, the combination of the presentation, algorithmic contribution, and empirical evaluation is in my opinion relatively weak. However, I do not feel strongly enough about this to champion a rejection when others disagree.
Weaknesses: Despite these strengths, there are a few concerns that make me lean towards
rejection.
While the experiments show otherwise, I am slightly surprised that the action
space is big enough to cause issues in terms of policy evaluation and would
need more data to be truly convinced. Clearly augmented SAC is suffering but,
unless there is something particular about these problems (of which I have no
detailed knowledge), generally the observation/state space is so much larger
than the action space that the additional input of `d` actions should not be
that significant.
Regarding clarity, the presentation of the paper was in my opinion confusing:
First, the method section contained a substantial amount of information that
would fit better in the background (e.g. 3.1, 3.2, and the start of 4). Second,
there was a very limited amount of information on the idea behind the paper or
its overview until you start digging into the details. For example, until the
last paragraph before the experiment section I had no idea where this was
going. Additionally, the theoretical section seems to attempt to justify the
approximations made by BPQL, but that was not clear at all until diving into
the section afterwards.
Most importantly, I still struggle to understand the theoretical support. I
believe it eventually provides a theoretical explanation for BPQL's loss
function, but I only truly understand the loss function from an intuitive
perspective and had to gloss over section 3 altogether. The main reason I lean
towards rejection is that it was not presented in a way that made the
theoretical section a contribution to me. This is mostly because it is fairly
'easy' to propose to 'just drop' some input to reduce the space and have it
work well for some problems. So the theory is important here but, in my
perspective, difficult to understand.
Lastly, some ablations that would be very informative are missing. For example,
it is clearly important that the policy accepts the augmented space (because
that seems to be the key difference between BPQL and 'normal SAC'), but the
obvious alternative is to do some variant where the policy takes in only the
last observation but the critic gets the augmented input. I think more in depth
experiments would further motivate the approach. In particular, I would want to
see why it is so important that the policy does accept the last `d` actions as
input: is this a property of the problems, or is there some theory or intuition
that explains this discrepancy?
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: N/A
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Title: Response to emergency reviewer 3qfv
Comment: Dear 3qfv,
We want to sincerely thank you for the effort you put into providing valuable comments, and provide additional details as follows:
**Impact of additional input of actions**
>"Clearly augmented SAC is suffering but, unless there is something particular about these problems (of which I have no detailed knowledge), ... the additional input of d actions should not be that significant."
As you mentioned, the original state usually accounts for a substantial portion of the augmented state. However, in the Walker2d-v3 environment with a delay of 9, the augmented state is made up of 17 dimensions from the original state and 54 dimensions from the additional actions. (Note that even in this particular case, the portion contributed by the added actions is already **more than three times larger!**)
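The dimension counts in this example follow from a simple calculation; a small sketch, using the Walker2d-v3 dimensions quoted above:

```python
def augmented_dim(state_dim, action_dim, delay):
    """Dimension of the augmented state (s_{t-d}, a_{t-d}, ..., a_{t-1})."""
    return state_dim + action_dim * delay

# Walker2d-v3: 17-dim state, 6-dim action; at delay 9 the action part
# (6 * 9 = 54 dims) dwarfs the state part (17 dims)
sizes = {d: augmented_dim(17, 6, d) for d in (1, 5, 9, 20)}
```

The linear growth in delay is what makes the augmented input increasingly burdensome for the critic, motivating the projection back to the original state size.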
However, we think these two factors affect the size of the state space differently. In the field of robot learning that we focus on, components such as velocity and angular velocity that make up a robot's state are constrained by the physical environment and actuators, so each component's magnitude is intrinsically limited. This is similar (though not identical) to the Manifold Hypothesis in the deep learning literature: most high-dimensional data lies on low-dimensional manifolds.
The added actions, however, are the problem: unlike the state, they can be completely random. For instance, in reinforcement learning, a policy tends to select actions randomly at the beginning to encourage exploration. This randomness dramatically increases the effective size of the augmented state space.
To support this assumption, we conducted an experiment to investigate the difference in the convergence of the augmented Q-function when using **1) a fixed untrained policy** and **2) a fixed trained policy**. We compared the TD-error of the Q-function under both policies, and the results are as follows.
**Walker2d-v3 with delay 9**
|steps|50k|100k|150k|200k|250k|
|-|-|-|-|-|-|
|TD-error of untrained policy|551.7|422.7|194.9|110.4|92.0|
|TD-error of trained policy|57.0|57.5|56.4|56.3|57.7|
The results showed that the TD-error converges rapidly in the trained policy, while the untrained policy shows a slower convergence speed. This demonstrates how the randomness of initial policy significantly affects Q-value convergence, thereby confirming that the additional input of actions has a significant impact on augmented Q-function convergence.
**Contribution**
>"Most importantly, i still struggle to understand the theoretical support ... the main reason I lean towards rejection is because it was not presented in a way that the theoretical section was a contribution to me. This mostly because it is fairly 'easy' to propose to 'just drop' some input to reduce the space and have it work well for some problems. So the theory is important here but, in my perspective, difficult to understand."
As you mentioned, it is easy to intuitively grasp how our proposed algorithm works effectively from a practical perspective. However, in Section 3 (the theory section), we aimed to demonstrate why we chose belief projection as the reduction technique, and why using belief projection to train the critic leads to stable convergence even though the beta Q-values approximate the true Q-values. We believe that explaining the underlying reasons behind the successful performance of an algorithm will provide **valuable guidance** to machine learning researchers interested in this field for their future studies.
**Why we train the policy using augmented states.**
>"In particular, I would want to see why it is so important that the policy does accept the last d actions as input: is this a property of the problems, or is there some theory or intuition that explains this discrepancy?"
The idea of feeding a reduced-size state into the policy rather than an augmented state can be naturally considered. However, we respectfully note that, unfortunately, this is not feasible in BPQL. In an environment delayed by $d$ timesteps, the agent can only see the delayed observation $s_{t-d}$ at time $t$. In other words, unlike in the learning stage of the critic, there is no way to know the $s_t$ corresponding to the augmented state $(s_{t-d}, a_{t-d+1}, ..., a_{t-1})$. Therefore, in BPQL, the policy uses the augmented state, which contains the **most information available at that point, to make decisions.** Note that depending only on the last observed state would not be enough to make the right decision, because a series of actions has a significant impact on the current true state. Consider a game of chess, for example: the current true board state is heavily influenced by previous actions, making it challenging for the agent to choose the right action based solely on the last observed state. Hence, the policy should be trained with the augmented state.
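As a sketch of this design choice, the augmented input the actor consumes can be assembled from the delayed observation and a buffer of the most recent actions. The helper below is hypothetical and for illustration only, not the authors' code:

```python
from collections import deque

def make_augmented_state(last_obs, action_history, d):
    """Flatten the augmented state (s_{t-d}, a_{t-d}, ..., a_{t-1})
    that the actor consumes: the last observed (delayed) state followed
    by the d most recent actions, oldest first."""
    assert len(action_history) == d, "need exactly d buffered actions"
    flat = list(last_obs)
    for a in action_history:
        flat.extend(a)
    return flat

# at each control step: act, then remember the action for the next query
history = deque(maxlen=3)                 # d = 3 buffered actions
for a in ([0.1], [0.2], [0.3]):
    history.append(a)
aug = make_augmented_state([1.0, 2.0], history, d=3)  # 2 + 3*1 = 5 dims
```

The critic, by contrast, never sees this flattened vector at evaluation time; it is trained on the original-sized states stored in the replay memory.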
---
Rebuttal Comment 1.1:
Title: Thank you for the detailed rebuttal
Comment: Thanks for the clear response. It has helped me understand the broader picture, and I will update my review accordingly.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the reviewer's effort in reviewing our manuscript, providing constructive feedback, and raising his/her score!
Title: RE: Thank you for the detailed rebuttal | null | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The paper proposes a new belief projection-based method for approximating delayed states in reinforcement learning. The method is derived from basic principles with properties of the projection demonstrated. The method is then transformed into a practical, model-free approach that is extensively evaluated and benchmarked, with promising results.
Strengths: - The paper is easy to follow, the derivations are cleanly done and well explained.
- The idea is interesting and sound
- I really liked the logical progress of deriving the method with the assumption of having access to all the probabilities etc. and then step by step turning it into a practical method
- The experiments show a good number of baselines, and sufficiently many standard benchmarks. The ablation with the delay steps, network capacity, and stochasticity are well done. The number of repetitions is adequate.
- The method shows clear experimental benefits
Weaknesses: - I found Sect 2.1 and 2.2 a tiny bit confusing in terms of notation initially. Especially in Sect 2.1 it was initially not clear why X is used and initially it wasn't clear to me that the reward is also delayed (Sect 2.2). Just some minor writing tweaks
- Sect. 2.2: in conventional control having a system-model-based observer (e.g. Kalman filter) is also quite common - not just the augmented state - similar to the methods described a bit further down
- In Sect. 1 it almost sounds like the method also considers (or should consider) varying delays from e.g. hardware issues. I'd suggest making it clear from the start that you consider constant delays.
- I would have appreciated some little outline/forward pointer. On the first read I was a bit puzzled by the apparent assumption e.g. in Eq (7) of having access to the full model while initially a model-free method was promised. Briefly explaining the structure of the paper in the beginning would avoid that confusion.
- missing limitations (see below)
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - See small writing suggestions above
- The belief projection method comes a bit out of the blue, can you better motivate why this is a good idea?
- The setting is connected to POMDPs where there is quite a bit of literature but which is only mentioned in passing. Can you discuss the relation a bit better?
- Please detail the limitations better, you mention random delayed environments which are an obvious limitation.
* Do you think the method would still perform reasonably if there is some small randomness in the delay (even if that's not backed by the theory)?
* What are the implications of the linear approximation?
* More generally, this seems to be a lossy compression, bringing everything back down to the original state-space size. In which cases will having the full augmented state (and infinite data etc.) still result in better performance?
* Can you do something in between, i.e., project to a slightly larger state-space?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: - societal impact is not really applicable
- an explicit discussion on limitations is a bit short (see above)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Title: Response to emergency reviewer MGmU
Comment: Dear reviewer MGmU,
We sincerely appreciate the reviewer for providing valuable feedback and showing interests in our manuscript. We have carefully considered each of your comments, and provide additional details as follows:
**Motivation**
>"The belief projection method comes a bit out of the blue, can you better motivate why this is a good idea?"
Many machine learning references have shown that dimensionality reduction methods, such as variational autoencoders (VAEs), contribute to efficiently training neural networks. To leverage these advantages in the RL framework, we decided to use an effective reduction method for the augmented state space. As we discussed in Lines 134-137 of the main text, if two transition probabilities $P(s_t|\bar{s_t^1})$ and $P(s_t|\bar{s_t^2})$ are similar, we can assume that the augmented states $\bar{s}_t^1$ and $\bar{s}_t^2$ share similar representative meanings; that is, if two augmented states (each created by a sequence of actions following the last observed state) induce similar distributions over the current _true_ state, they carry similar information. This insight guided us to adopt belief projection as the dimensionality reduction method.
**Delayed environment and POMDPs**
>"The setting is connected to POMDPs where there is quite a bit of literature but which is only mentioned in passing. Can you discuss the relation a bit better?"
Environments with delays are equivalent to partially observable MDP (POMDP) environments. If we train an agent in this POMDP while ignoring the delays, the agent will end up with a _suboptimal_ policy. Therefore, for the agent to obtain an optimal policy, it is common practice either to use the augmented-state method to simplify the POMDP into an MDP, or to use an explicit model for state estimation.
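To make the augmented-state construction concrete, here is a minimal, hypothetical sketch (not the authors' code) of forming an augmented state under a constant observation delay: the last observed state is stacked with the actions executed since it was observed.

```python
from collections import deque

def make_augmented_state(delayed_obs, recent_actions):
    """Augmented state: the last observed (delayed) state plus the
    actions executed since that observation. For a delay of d,
    recent_actions holds exactly d actions."""
    return (delayed_obs, tuple(recent_actions))

class ConstantDelayBuffer:
    """Tracks the action history needed to form augmented states
    under a constant observation delay of `delay` steps."""
    def __init__(self, delay):
        self.delay = delay
        self.actions = deque(maxlen=delay)

    def step(self, action):
        self.actions.append(action)

    def augmented(self, delayed_obs):
        # Only valid once `delay` actions have been taken.
        assert len(self.actions) == self.delay
        return make_augmented_state(delayed_obs, self.actions)
```

Because the deque keeps only the last `delay` actions, each new action pushes out the oldest one, so the augmented state always pairs the delayed observation with exactly the actions taken since it.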
**Questions**
>"Do you think the method would still perform reasonably if there is some small randomness in the delay?"
We conducted an additional experiment to investigate whether our proposed algorithm still performs well in an environment that includes small randomness. The experiment was conducted in the HalfCheetah-v3 environment, and the agent was trained in an environment with a constant delay of 3. The agent was then evaluated in the following randomly delayed environment: with probability 0.95 the agent observes a state delayed by 3 timesteps, and with probability 0.05 it observes a state with a random delay in the range [1, 5]. The results are listed in the table below.
||baseline (constant delayed env.)|randomly delayed env.|
|-|-|-|
|Performance|8743 ± 103|7119 ± 628|
BPQL shows **robust** performance in the presence of small randomness. However, there was a small performance deterioration in the randomly delayed environment.
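For clarity, the delay-sampling rule used in this evaluation can be sketched as follows (a hypothetical reimplementation of the protocol described above, not the authors' code):

```python
import random

def sample_delay(rng, base_delay=3, noise_prob=0.05, delay_range=(1, 5)):
    """With probability 0.95 return the constant delay of 3;
    with probability 0.05 return a uniform random delay in [1, 5]."""
    if rng.random() < noise_prob:
        return rng.randint(*delay_range)
    return base_delay

rng = random.Random(0)
delays = [sample_delay(rng) for _ in range(10000)]
```

Note that the rare random branch can also draw a delay of 3, so the observed delay equals the training delay slightly more than 95% of the time.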
>"What are the implications of the linear approximation?"
The implication of the linear approximation is that the augmented-state-based Q-function is represented as a linear combination of the original state-based Q-function in lower dimensions, chosen to minimize a weighted Euclidean distance. In other words, we used belief projection to express this augmented-state-based Q-function as an expected value of the beta-Q function.
>"More generally, this seems to be a lossy compression ... In which cases will having the full augmented state (and infinite data etc.) still result in better performance?"
Yes, our proposed algorithm BPQL uses a linear approximation method, which inevitably produces some residuals (Eq. 11 and 12); in other words, as you mentioned, it is a lossy compression. So, in a simple environment (where the augmented state space is not too large), the augmented approach might perform better, because that method performs theoretically "error-free" temporal difference (TD) learning.
For instance, let's consider the InvertedPendulum-v2 environment, where observations have dimension 4 and actions dimension 1. If the delay is 6 timesteps, the total dimension of the augmented state becomes only 4 + 1 × 6 = 10. In such a relatively simple environment, the Augmented SAC method demonstrates more stable convergence of performance, as depicted in Figure E in the Appendix.
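The dimension count above follows directly from the augmented-state construction; as a quick sanity check (assuming a flat concatenation of the observation and the delayed actions):

```python
def augmented_state_dim(obs_dim, act_dim, delay):
    """Dimension of the augmented state: the last observed state
    concatenated with the `delay` actions taken since."""
    return obs_dim + act_dim * delay

# InvertedPendulum-v2 with a delay of 6 timesteps:
assert augmented_state_dim(obs_dim=4, act_dim=1, delay=6) == 10
# The same construction grows quickly in larger environments,
# e.g. HalfCheetah-v3 (obs dim 17, act dim 6) with delay 6:
print(augmented_state_dim(17, 6, 6))  # 53
```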
>"Can you do something in between, i.e., project to a slightly larger state-space?"
In the proposed algorithm BPQL, this is not possible, because BPQL approximates the augmented state in the _original_ state space, minimizing the weighted Euclidean norm between the approximated and actual values. However, we find this a very interesting question. In an extremely delayed environment, it may be hard to ignore the information loss arising from the linear approximation of belief projection. In such situations, using other **non-linear dimensionality reduction methods** such as VAEs, **which can project to a slightly larger state space**, could be advantageous.
---
Rebuttal Comment 1.1:
Title: thanks for the rebuttal
Comment: Thanks a lot for the detailed replies and additional experiment. This helped me significantly to understand your method better.
My comment on POMDPs might have not been entirely clear: the connection in terms of setting is obvious, what I was looking for is relations to methods for solving (more general) POMDPs.
---
Reply to Comment 1.1.1:
Comment: We are delighted that our response has clarified your understanding of the manuscript!
For the POMDPs part, we list several methods to address more general POMDPs as follows:
**State Estimation Methods**
To begin with, we have the Kalman-filter-based approach. For a linear stochastic state-space model, the so-called linear quadratic Gaussian (LQG) control theory has been well established, which optimizes the expected value of a given quadratic cost function. Remarkably, such a theoretically well-understood LQG controller separates into a deterministic linear quadratic (LQ) controller and a Kalman filter. Going well beyond the basic problem setting of LQG control, our paper considers nonlinear and model-free systems, realistic non-quadratic reward functions, and intractable time delays. These issues are almost impossible to handle in classical model-based explicit approaches such as LQG control theory. Furthermore, the aforementioned separation principle does not hold, which means the Kalman filter is not optimal. Our paper proposes a practical approach for controlling nonlinear systems with time delays and more general non-quadratic reward functions in a data-driven manner, without requiring any prior model information. Specifically, it addresses efficient state exploration by compactly reducing the augmented state space size, as well as value-function representation for practical implementation.
In addition, there are attempts to estimate states by learning the dynamics of the model in various ways. [Agarwal and Aggarwal] construct the transition dynamics with a count-based model. Furthermore, [Derman et al.] utilize neural networks to learn the dynamics model of the environment. Moreover, an RNN-based architecture can also be used to build dynamics models [Hausknecht and Stone]. However, these approaches, though intuitive, tend to accumulate errors when predicting states that are far from the current timestep, consequently impeding agent learning. In this paper, we address this issue of model errors using a model-free approach rather than a model-based one.
**Complete Information Methods**
In this method, states with incomplete information are transformed into states with complete information. It is one of the widely used approaches to solving POMDPs. [Mnih, Volodymyr, et al.] constructed a complete state by stacking past states, and [Simon Ramstedt and Christopher Pal] created states with complete information by adding a single action. The augmented approach discussed in this manuscript can also be categorized under the complete information methods. However, a limitation of this method is the exponential growth of the state space, which impedes the convergence of the value function.
**Relevance to the existing POMDP approaches**
This paper deals with a POMDP problem through an approximated MDP approach, which is the basic source of the benefits of the proposed scheme. The proposed algorithm, BPQL, attempts to use the second method above efficiently for POMDP-to-MDP conversion. The explosive growth of data dimensions in the second method is strategically addressed in BPQL by approximating the augmented state space in a lower dimension. As the delay size increases, existing methods suffer from the curse of dimensionality; this is not the case with BPQL. The state dimension of BPQL is constant regardless of the delay size, which is an unprecedented advantage. Since the POMDP method itself is not a main issue, its description was not detailed. Nevertheless, we agree that briefly summarizing the literature related to POMDPs would enhance broader reader comprehension. Therefore, we will include a more comprehensive discussion of POMDPs in the revised version of the paper. We sincerely appreciate your constructive suggestion!
**Reference**
[1] Agarwal, Mridul, and Vaneet Aggarwal. “Blind decision making: Reinforcement learning with delayed observations.” Pattern Recognition Letters 150 (2021): 176-182.
[2] Derman, Esther, Gal Dalal, and Shie Mannor. “Acting in delayed environments with non-stationary markov policies.” arXiv preprint arXiv:2101.11992 (2021).
[3] Hausknecht, Matthew, and Peter Stone. “Deep recurrent q-learning for partially observable mdps.” 2015 AAAI Fall Symposium Series. 2015.
[4] Mnih, Volodymyr, et al. “Human-level control through deep reinforcement learning.” Nature 518.7540 (2015): 529-533.
[5] Ramstedt, Simon, and Chris Pal. “Real-time reinforcement learning.” Advances in Neural Information Processing Systems 32 (2019).
Title: Response to emergency reviewer MGmU (2)
Unsupervised Semantic Correspondence Using Stable Diffusion | Accept (poster) | Summary: The authors propose an approach to establish semantic correspondences, i.e., correspondences between different instances of the same object, using a pre-trained Stable Diffusion model and without any task-specific finetuning. In particular, they leverage the fact that intermediate attention layers of the diffusion U-Net respond to the semantics of the text prompt. For a source image and a particular query pixel location for which a match should be computed, they optimize the prompt embedding so that it leads to a cross-attention map with a high activation at the query location. Once the prompt is recovered, they can apply the same attention process to a target image conditioned on the optimized prompt. The intermediate cross-attention layers of the diffusion model then highlight the matching location. They also introduce technical ‘tricks’ to make it work, for example averaging over multiple crops of an image or starting from different random embeddings. This process must be repeated for each query pixel location.
Strengths: - The insight that stable diffusion models inherently learn semantic correspondences without task-specific training is interesting. How to extract and use such knowledge has not been explored in the past.
- The results are convincing: the approach obtains near state-of-the-art results in semantic matching without any task-specific knowledge or finetuning.
- The paper reads well overall
Weaknesses: 1) Missing comparisons and citations in results: PWarpC [Truong et al. CVPR 2022]:
- the paper only includes comparisons to two weakly-supervised semantic matching approaches, with relatively weak results, which allows the authors to claim that their approach is 20% better than the state-of-the-art in weakly-supervised semantic matching.
- The authors did not include results of PWarpC, a weakly-supervised approach for semantic matching presented at CVPR 2022. The results of PWarpC significantly outperform the baseline results of this paper for SPair-71K (38.0 on SPair-71K for PWarpC-NC-Net), making the above claim invalid.
- on PF-Willow: this should be double-checked since the two papers use different metrics, but PWarpC (weakly-supervised) outperforms CATs, which in turn obtains better metrics than the proposed approach. PWarpC might therefore outperform the proposed approach.
- please include these comparisons and revise the claims/analysis accordingly
2) while the paper presents an interesting analysis, the proposed approach is currently very impractical and almost unusable since, as the authors say, it takes 30s to generate a single match. I believe it would be interesting to have even a small experiment training a semantic matching model on pseudo ground-truth matches obtained from the diffusion model.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: In addition to the weakness above:
A) What is the resolution of the extracted feature/attention maps, and at what resolution is the matching done? For example, are the features interpolated to a higher resolution before computing the cross-attention weights?
Is a similar resolution used in competitor works like DINO?
This is important because the resolution of the feature maps is of crucial importance for obtaining fine-grained/accurate correspondences. It has been the limiting factor of many previous approaches that needed to compute all-to-all correspondences, because of memory constraint. Just using a higher resolution feature map for the cross-attention would lead to an improvement in results, not necessarily related to the features themselves.
B) L.230: the authors mention it takes ‘30s’ to optimize the prompt; does this include only finding the prompt, or also finding the match? A real run-time analysis of time versus performance for different settings would be interesting here (depending on how many crops are used, how many initializations, etc.).
C) clarification needed:
C.1) L.178: M′ has dimension C×(h×w)×P; where is the channel coming from? According to the equation, M′ is computed with the standard attention mechanism, which is just a matrix multiplication followed by a softmax. As a result, the dimensions should be (h×w)×P.
C.2) L.185, 187, it would be helpful to specify the shape of M′′ and M, to better follow the different steps.
What is the dimension of P for the learnt embedding?
C.3) L213, is it equation 5 or equation 7? The authors say they average the optimization over multiple crops and over multiple initialization of the embeddings. Is this happening at the same time, i.e. for different initialization, the optimization is done over multiple crops? If so, then it should be equation (7).
C.4) in Tab1, what is meant by ‘weakly-supervised (test-time optimization)’? DINO with NN is for me unsupervised since it only requires single images for the contrastive learning part.
some typos:
L.207 hyperparemter
L.53-54 Note that while finding actual prompt thatcorrespond to words is a discrete problem, hence difficult to solve. -> remove while
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: they have adequately addressed the limitations and negative impacts of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback and for recognizing that our approach provides new insights for the estimation of semantic correspondences with stable diffusion. We are glad that the reviewer finds our results convincing. We address the reviewer's concerns below:
### **W-1: Missing comparison with PWarpC**
Thank you for pointing out this baseline. Below is the updated table with PWarpC; as there are many variants, we have chosen the one that performs best. The complete updated table can be found in **Table 1** in the included rebuttal PDF above.
| | CUB-200 | | PF-Willow | | SPair-71k | |
| --- | --- | --- | --- | --- | --- | --- |
| | PCK@0.05 | PCK@0.10 | PCK@0.05 | PCK@0.10 | PCK@0.05 | PCK@0.10 |
| Strong supervision | | | | | | |
| PWarpC-NC-Net* res101 [1] | -- | -- | 48 | 76.2 | 21.5 | 37.1 |
| Weak supervision | | | | | | |
| PWarpC-NC-Net res101 [1] | -- | -- | 45 | 75.9 | 18.2 | 35.3 |
| Unsupervised | | | | | | |
| DINO+NN [2] | 52.8 | 68.3 | 40.1 | 60.1 | -- | 33.3 |
| Our method | 61.6 | 77.5 | 53 | 84.3 | 28.9 | 45.4 |
[1] Prune Truong, Martin Danelljan, Fisher Yu, and Luc Van Gool. Probabilistic warp consistency for weakly-supervised semantic correspondences. In Conference on Computer Vision and Pattern Recognition, 2022.
[2] Shir Amir, Yossi Gandelsman, Shai Bagon, and Tali Dekel. Deep ViT features as dense visual descriptors. arXiv Preprint, 2021.
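As an aside for readers unfamiliar with the metric in the tables above: PCK@α (Percentage of Correct Keypoints) counts a predicted keypoint as correct if it lies within α times a reference size of the ground truth; the reference size varies per benchmark (e.g. the object bounding box or image side). A minimal sketch under that assumption:

```python
import math

def pck(preds, gts, ref_size, alpha=0.10):
    """Fraction of predicted keypoints within alpha * ref_size
    (in pixels) of their ground-truth locations."""
    correct = sum(
        math.dist(p, g) <= alpha * ref_size
        for p, g in zip(preds, gts)
    )
    return correct / len(preds)
```

For example, with `ref_size=100` and `alpha=0.05`, a prediction 10 pixels from the ground truth counts as wrong (threshold is 5 pixels), while an exact match counts as correct.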
### **W-2 and Q-B: Practicality / Amount of time it takes to optimize the prompt**
It takes 30 seconds to optimize a text embedding; however, inference time is much quicker since it only needs to be run once per crop. For a single crop, it takes around 0.16 seconds, so for the 20 crops we used in our experiments, it takes 3.2 seconds. Inference time can be sped up by reducing the number of crops and Figure 9C of our paper shows the corresponding effect on final performance. This can be helpful for the case where a single optimized keypoint is applied to many images.
We would like to further note that while it takes 30 seconds, there is no network training involved. Other baselines, such as the weakly supervised ones on the table would require training. As an example, the best-performing weakly supervised baseline, ASIC, requires training for around 3 hours per dataset.
Regarding supervising another model with output from our method as pseudo ground truth, due to the limited size of the training dataset, any form of supervised training is shown to have an overfitting effect.
As a future research direction toward a more practical solution, we will explore optimizing text embeddings for multiple query points jointly, which would significantly speed up our approach (also suggested by *reviewer Vgvc*), or methods such as those suggested recently in the textual inversion literature using hypernetworks [3, 4]. We would also like to note that, while early methods such as the original NeRF and AlexNet were slow (at the time), follow-ups such as Instant NGP and MobileNet have sped them up by orders of magnitude.
[3]. Ruiz, Nataniel, et al. "HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models." *arXiv preprint arXiv:2307.06949* (2023).
[4]. Arar, Moab, et al. "Domain-Agnostic Tuning-Encoder for Fast Personalization of Text-To-Image Models." *arXiv preprint arXiv:2307.06925* (2023).
### **Q-A: Resolution of the features being used**
We only upsample the attention maps after the cross-attention layer to stay in line with the original stable diffusion model. Cross-attention layers with sizes 16x16 and 32x32 are upsampled to the original image size and then combined.
### **Q-C: M′ has dimension C×(h×w)×P, where is the channel coming from**
Thank you for pointing this out. We will include the fact in the paper that the cross attention is applied separately over the C channels, leading to the dimension of M′ being C×(h×w)×P.
### **Q-C.2: shape of M′′ and M and dimension of P**
Thank you for pointing this out. M'' is of shape [(H*W), P]. M is [H, W] and the dimension of P is 77. We will revise to make this clear.
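As a shape-level illustration of the cross-attention maps discussed here, the following NumPy sketch uses hypothetical sizes (C channels, an h×w feature map, P = 77 text tokens, feature dimension d); it is an illustrative sketch, not the authors' implementation, and the channel averaging and single-token selection are our assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

C, h, w, P, d = 8, 16, 16, 77, 40          # hypothetical sizes
Q = np.random.randn(C, h * w, d)           # image queries, per channel
K = np.random.randn(C, P, d)               # text-token keys

# Cross-attention applied separately over the C channels:
M_prime = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(d))  # C x (h*w) x P
M_double_prime = M_prime.mean(axis=0)                     # (h*w) x P
M = M_double_prime[:, 0].reshape(h, w)                    # map for one token
```

Each row of `M_prime` sums to one (a distribution over the P tokens), and reshaping a single token's column recovers a 2D attention map over the image grid.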
### **Q-C.3: Is this happening at the same time, i.e. for different initialization, the optimization is done over multiple crops? If so, then it should be equation (7).**
Thank you for pointing out this error. We will correct it. The reviewer is indeed correct that line 213 should be referring to equation (7), not equation (5). This then accurately reflects how the embeddings used at this stage are optimized with different crops.
### **Q-C.4: DINO with NN is for me unsupervised since it only requires single images for the contrastive learning part.**
Thank you for pointing this out. We will update the table by putting DINO+NN in an unsupervised row. The provided table also reflects this change.
---
Rebuttal Comment 1.1:
Comment: Thank you for taking into account our rebuttal and raising the score.
---
Rebuttal 2:
Comment: The authors have addressed most of my concerns and those of other reviewers. Despite the significant runtime problem, I think the paper proposes interesting contributions, so I will upgrade my score to 6.
Strengths: 1. The proposed method has a simple form but is effective.
2. The presentation is clear and easy to follow.
3. The qualitative results are impressive, especially the results between different classes.
4. Some quantitative results are significantly better than baselines, or comparable to supervised results.
Weaknesses: 1. More information should be provided on the optimization setup and implementation, e.g., the optimization algorithm for Eq. (5) and Eq. (9).
2. An analysis of complexity and efficiency is needed for a better assessment of the algorithm, especially in comparison to baselines.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I'm wondering how would the trained prompt compare to text embeddings, e.g. how would a prompt optimized from a bird's eye compare to the text embedding of "a bird's left eye"?
2. One textual baseline that doesn't appear in the experiments is the attention map of an actual text prompt, i.e. would the prompt of "a bird's left eye" have its highest attention to the eye's location?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have addressed valid limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for constructive feedback. We appreciate that the reviewer finds our approach effective and our qualitative results across classes impressive. We address the reviewer's concerns below:
### **W-1: More information about optimization setup and implementation**
We provide detailed information about the implementation in the Supplementary, as it is difficult to fit into the main paper.
We guarantee we will release the code upon acceptance of the manuscript to facilitate reproduction.
### **W-2: The complexity and efficiency are needed for a better assessment of the algorithm, especially in comparison to baselines.**
We only optimize the text embeddings, hence 59k parameters. No stable diffusion weights are touched. As reported in the submission (Section 5), this results in ~30 seconds per single prompt on a 3090 GPU. Note that while this is the case, compared to baselines, this could result in less compute time if evaluating only a few images -- this is because the weakly supervised baselines require training a deep network (which takes approximately 3 hours per dataset for the best-performing baseline, ASIC), which is not required by our method.
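The "59k parameters" figure can be sanity-checked as follows, assuming (our assumption, not stated in the rebuttal) the standard Stable Diffusion v1 text-conditioning shape of 77 tokens with embedding dimension 768:

```python
# Assumed SD v1 text-embedding shape: 77 tokens x 768 dimensions.
num_tokens, embed_dim = 77, 768
num_params = num_tokens * embed_dim
print(num_params)  # 59136, i.e. ~59k optimized parameters
```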
### **Q-1 and Q-2: How would a prompt optimized from a bird's eye compare to the text embedding of "a bird's left eye"? What does the attention map look like?**
To effectively visualize the optimized embeddings we look into the attention maps for the optimized embedding and the natural prompt.
In the provided rebuttal PDF above, **Figure 2** illustrates an initial image of a bird, while **Figure 3** shows the corresponding optimized attention map focused on the bird's eye. The optimized attention map notably aligns with the Gaussian distribution it was supervised to imitate.
Furthermore, **Figure 5** in the same PDF visualizes the attention maps for each token of the sentence "a bird's left eye." The bird's eye gets highlighted as we would expect, but not as localized as with the optimized embedding.
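The Gaussian supervision mentioned above can be sketched as follows: build a 2D Gaussian target centred on the query pixel and fit the cross-attention map to it (a hypothetical reconstruction; the paper's exact loss, sigma, and optimizer may differ):

```python
import numpy as np

def gaussian_target(h, w, cy, cx, sigma=2.0):
    """Normalized 2D Gaussian heatmap centred on pixel (cy, cx)."""
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def attention_loss(M, target):
    """L2 loss between an attention map M and the Gaussian target;
    the prompt embedding would be optimized to minimize this."""
    return float(((M - target) ** 2).mean())
```

The target's peak sits exactly at the query location, so an attention map that matches it attends most strongly to that pixel.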
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the rebuttal. It has addressed most of my concerns and I'd raise the score to 7.
---
Rebuttal 2:
Comment: Thank you for taking into account our rebuttal and raising the score. | Summary: This paper focuses on unsupervised semantic correspondence and proposes to leverage the semantic knowledge within a popular text-to-image diffusion model to accomplish this task. The method optimizes a learnable text prompt to maximize the source-image attention value at the query location. Then, the optimized text prompt is used to find the corresponding location in the target image by taking the maximum of the target-image attention values. Related experiments show the effectiveness of the proposed method.
Strengths: 1. The idea of leveraging the semantic knowledge of a text-to-image diffusion model sounds interesting. This paper builds a new type of unsupervised semantic correspondence method based on a text-to-image diffusion model.
2. The designed method is effective and achieves promising correspondence results on common datasets.
3. The design choices are reasonable, e.g., random crops for preventing overfitting when finding the text prompts.
Weaknesses: 1. The usage of the text-to-image (t2i) diffusion model is not well motivated. The proposed method mainly leverages the cross-modal ability of the t2i diffusion model but underuses its generative ability. The role of the t2i diffusion model in the proposed method could be filled by other cross-modal pretrained models, e.g., CLIP, BLIP, GLIP. Please clarify the unique significance of adopting a t2i diffusion model rather than another cross-modal pretrained model.
2. The proposed method has to execute dozens of denoising steps for each query location. As diffusion models are notorious for their inference speed and cost, the total overhead is expensive. Are there any mitigation measures, for example grouping the query locations or modifying the diffusion model architecture?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please see “Weakness” part.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The authors objectively discuss the limitations of the proposed method, e.g., the slow inference time caused by the introduction of diffusion model. This paper doesn’t have negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback and for recognizing that our approach is effective and our results on unsupervised semantic correspondences are promising. We address the reviewer's comments below:
### **W-1: Missing motivation of using text-to-image (t2i) diffusion models and comparison with CLIP, BLIP, or GLIP networks.**
Our work is motivated by the observation that text-to-image diffusion models are capable of generating high-quality images conditioned on textual prompts, indicating that these models understand the semantics of the objects being generated. This is owing to the specialized architecture used in Stable Diffusion which produces attention maps using cross-attention between textual and image features.
It is indeed possible to leverage cross-modal models like BLIP within our proposed framework, although this is suboptimal. Taking the reviewer's suggestion into account, we use BLIP to obtain semantic correspondence between a pair of images by capturing the output of the cross-attention layers between the text embeddings and the image features. This is done similarly to our method, across multiple layers in the model, and aggregated. We get significantly worse performance on PF-Willow (13.1% for PCK@0.05 and 31.7% for PCK@0.1) in comparison to our method (53.0% for PCK@0.05 and 84.3% for PCK@0.1). This underperformance may be attributable to factors such as the size of the dataset the model was trained on, the number of parameters in the model itself, or the model outputting text tokens as opposed to denoising the image. We thank the reviewer for this suggestion and will include these results in the revision.
### **W-2: Can grouping the query location reduce the computation overhead of SD inference?**
This is an interesting observation. This would indeed reduce the computation overhead significantly. We leave this research to be explored in the future.
---
Rebuttal Comment 1.1:
Comment: Having read the rebuttal and the other reviews I decide to keep my initial rating. I hope the authors would address the time cost carefully.
---
Rebuttal 2:
Comment: Thank you Reviewer Vgvc for your acknowledgement!
We will certainly address the timing concerns in our future research. Preliminarily, we have found that grouping can indeed enhance speed significantly: a group of correspondences and a single correspondence require similar runtime.
Thanks,
Authors | Summary: This paper explores unsupervised semantic correspondence tasks with a Stable Diffusion model. Specifically, the authors propose to first optimize the prompt embeddings of the Stable Diffusion model to maximize attention on the region of interest; the optimized prompts are then used for semantic correspondence.
Experimental results on the PF-Willow, CUB-200 and SPair-71k datasets show that the proposed method significantly outperforms weakly supervised / unsupervised methods.
Strengths: - This paper shows that one could explore diffusion models for semantic correspondence tasks.
- the writing is clear and easy to follow.
- The experiments show that the proposed method could even outperform supervised method on PF-Willow.
Weaknesses:
1. How important is the optimization of the prompts? Could the authors show results if the prompt is a vanilla sentence, e.g. "an image of a cat"?
2. Concurrent works: there are multiple papers presenting correspondence ability of diffusion models:
[1*] Luo, Grace, et al. "Diffusion Hyperfeatures: Searching Through Time and Space for Semantic Correspondence." arXiv preprint arXiv:2305.14334 (2023).
[2*] Tang, Luming, et al. "Emergent Correspondence from Image Diffusion." arXiv preprint arXiv:2306.03881 (2023).
[3*] Zhang, Junyi, et al. "A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence." arXiv preprint arXiv:2305.15347 (2023).
Could the authors explain the differences if possible?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: please refer them to the weakness session
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and for recognizing that our approach achieves state-of-the-art results. We address the reviewer's concerns below:
### ***W-1: Importance of prompt-optimization for the correspondence task***
Optimization of the prompts is critical. Without it, one cannot truly “localize”. As an example, we are attaching the attention map corresponding to the prompt “An image of a cat”. For the token “cat” the entire body of both cats is attended to rather than a single cat or parts of the cat. See respectively **Figures 1 and 4** in the provided rebuttal PDF above.
### ***W-2: Comparison with papers submitted to Arxiv after Neurips deadline***
We note that the papers mentioned by the reviewer appeared on Arxiv after the NeurIPS submission deadline, and we believe that, per the NeurIPS guidelines, they should be counted as contemporary work and should not affect the evaluation of our paper.
Nonetheless, in the revision we plan to cite them, as they are quite relevant, and to highlight the core difference between our method and theirs: they all investigate how to use the deep features within the Stable Diffusion network effectively, similarly to how VGG19 features are widely used for various tasks. Our method, on the other hand, investigates how to steer the attention maps within Stable Diffusion, in other words taking into account how these features are meant to be used within Stable Diffusion. To do so, we optimize embeddings, and by doing so we show that one can perform tasks other than simple image creation, such as the semantic correspondence task we demonstrated. This is not the limit of what our framework can do, as *reviewer 5igN* pointed out; for example, an immediate extension of our method would be to learn an embedding for a part of the cat, e.g., a paw, using multiple images rather than just one. We note that our current work is a first investigation of whether this is possible at all; whether one solution is more general and useful than another is a question for future work.
In more detail, Diffusion Hyperfeatures [1*] consolidates multi-scale and multi-timestep feature maps from Stable Diffusion into per-pixel feature descriptors with a lightweight aggregation network. A Tale of Two Features [3*] introduces a fusion approach that capitalizes on the distinct properties of Stable Diffusion (SD) features and DINOv2 by extracting per-pixel features from each. Emergent Correspondence from Image Diffusion [2*] likewise extracts per-pixel features from Stable Diffusion.
[1*] Luo, Grace, et al. "Diffusion Hyperfeatures: Searching Through Time and Space for Semantic Correspondence." arXiv preprint arXiv:2305.14334 (2023).
[2*] Tang, Luming, et al. "Emergent Correspondence from Image Diffusion." arXiv preprint arXiv:2306.03881 (2023).
[3*] Zhang, Junyi, et al. "A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence." arXiv preprint arXiv:2305.15347 (2023). | Rebuttal 1:
Rebuttal: We thank the reviewers for their constructive feedback. We are glad that the reviewers acknowledge that our method is novel and provides new insights, with solid experimental results and clear exposition. We provide detailed responses in the individual rebuttals and include the figures and tables in the attached PDF.
Pdf: /pdf/3543b27f6fc61453cf0d92075ca354fa976e24a8.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes an approach for unsupervised semantic correspondence by employing a pre-trained Stable Diffusion generator. The basic idea is that since text-to-image diffusion models are capable of generating realistic images, they must understand the semantics of the objects they generate and thus be capable of finding semantic similarities in images.
Technically, the approach feeds a real image (or, rather, its latent vector) into a diffusion pipeline, recovers attention matrices between pixel values and the textual prompt, and optimizes the textual prompt embedding so that the cross-attention focuses on a query pixel location. Once optimization is over, the same text embedding can be employed to find semantic correspondences in a target image. Cross-attention matrices are averaged across channels and a subset of U-Net layers, using bilinear interpolation. Further, the target attention map is modeled as a Gaussian distribution centered on the query location, and an L2 loss is employed for optimization.
Authors also propose two regularization strategies to prevent over-fitting during optimization, i.e. averaging across crops of the query image, and averaging across multiple rounds of optimizations/initialization choices.
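The optimization loop summarized above can be sketched roughly as follows. This is a hedged illustration, not the authors' implementation: `get_attention` is a stand-in for the full diffusion pipeline (which would return the channel- and layer-averaged cross-attention map for the embedding), and the Gaussian width, step count, and learning rate are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def gaussian_target(h, w, cy, cx, sigma=2.0):
    """Target attention map: a 2D Gaussian centered on the query pixel,
    normalized to sum to 1."""
    ys = torch.arange(h, dtype=torch.float32).unsqueeze(1)
    xs = torch.arange(w, dtype=torch.float32).unsqueeze(0)
    g = torch.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def optimize_embedding(get_attention, embedding, query_yx, steps=200, lr=0.1):
    """Optimize a prompt embedding so its cross-attention map focuses on
    the query pixel (L2 loss against the Gaussian target). `get_attention`
    must map an embedding to a normalized (H, W) attention map."""
    emb = embedding.clone().requires_grad_(True)
    opt = torch.optim.Adam([emb], lr=lr)
    for _ in range(steps):
        attn = get_attention(emb)                      # (H, W), sums to 1
        target = gaussian_target(*attn.shape, *query_yx)
        loss = F.mse_loss(attn, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return emb.detach()
```

After optimization, the same embedding would be applied to the target image's attention maps, with the correspondence read off at the attention peak.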
Experimental results are conducted over three standard benchmarks for the task, i.e. SPair-71k, PF-Willow and CUB-200. The proposed approach achieves superior results when compared to unsupervised or weakly-supervised approaches, and is also competitive when compared to models employing strong supervision.
Strengths: - The main idea of the paper (extracting/exploiting the semantics embedded in a pre-trained SD network) is novel and interesting. It is in line with some recent pre-prints working on related tasks (e.g. image segmentation), but to my knowledge the proposed approach is novel with respect to published papers.
- The technique for extracting semantics is simple and effective, and also are the two regularization techniques.
- Experimental results are solid and confirm the appropriateness of the approach.
- The paper is well written, clear and a pleasure to read. The supplementary material is also interesting and comprehensive.
Weaknesses: - While the idea behind the paper is very general and could have been applied to a variety of tasks involving semantics (e.g. image segmentation, panoptic segmentation, classification), the paper only focuses on finding semantic correspondences. This limits the overall impact of the work to a single task of Computer Vision. I wonder if authors have tried to extend to other tasks, and how difficult that would be.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have successfully discussed the limitations of the work in the appropriate section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback and for recognizing that our proposed approach is novel, effective and our experimental results are solid. We address the reviewer's comments below:
### ***Discussion on the extension of the approach to tasks such as semantic segmentation, classification etc.***
We thank the reviewer for suggesting an extension to this work. Indeed image segmentation tasks could possibly be accomplished by optimizing an embedding to attend to a region as opposed to a single point. We plan to continue exploring potential extensions -- as the reviewer pointed out, our framework is indeed very general after all, and can be seen as a direction forward for extracting learned knowledge from these large generative models.
---
Rebuttal Comment 1.1:
Comment: I acknowledge that I have read the response, and I intend to keep my original rating.
---
Rebuttal 2:
Comment: Thank you Reviewer 5igN for your acknowledgement!
We would also like to share that, as you guessed, preliminary results show that semantic segmentation is also something that is quite possible. We are also seeing positive signs for other applications as well. While we cannot be specific about them at the moment due to internal reasons, we wanted to share our enthusiasm with you.
Thanks,
Authors | null | null | null | null | null | null |
Dynamically Masked Discriminator for GANs | Accept (poster) | Summary: This paper presents a novel method for training Generative Adversarial Networks (GANs) based on online continual learning, addressing the persistent challenge of GAN training instability.
The method considers the time-varying distribution of generated samples and prompts the discriminator to learn new information swiftly. It accomplishes this by detecting when the discriminator is lagging in learning from newly generated data and applying dynamic masking to discriminator features, thus forcing faster adaptation to the changing data distribution.
The method not only improves GAN training, but also outperforms existing SOTA methods and advanced diffusion models.
Strengths: 1. Excellent pilot study to understand the generated distribution.
2. Method well explained. Nice flow chart
3. Promising image results supported by FID.
4. Multiple datasets
Weaknesses: 1. The video in the supplementary is confusing.
2. Lack of intuition explanation or theory proof to the proposed “Dynamically masked discriminator”
3. Figs. 3 and 5 both show feature maps for visualization. I don't understand these feature maps; please provide more explanation.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Does the proposed method introduce a significant amount of computation cost?
2. I can’t understand the feature maps used in the figure, asking for more explanation.
3. How well does the proposed method generalize to other GAN methods?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Refer to previous sections.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the reviewer's positive feedback and insightful comments. We are delighted that the reviewer finds our work a "novel method" with an "excellent pilot study" that is "well explained" with a "nice flow chart", and that shows "promising image results" on "multiple datasets".
**Q1. Asking for more explanations on the feature maps in Figs. 3,5**
**A.** Feature maps in Fig. 3 are extracted from a layer of StyleGAN-V2's discriminator at different training steps. These feature maps visualize the attentive regions of the discriminator, showing that StyleGAN-V2's discriminator pays attention to almost fixed local regions over time, given the input image. That is, StyleGAN-V2's discriminator depends on almost fixed local regions for discrimination. However, as the generator evolves during training, the discriminative local regions that distinguish real from generated samples change over time.
Hence, the discriminator is expected to pay attention to different local regions of the samples at different steps, in order to learn the time-varying generated distributions (see Fig. 2).
In Fig. 5, feature maps also show the attentive regions of the discriminator. For example, the feature map in the fifth column assigns high values to the right eye regions. By masking the feature map in the sixth column, our method breaks the original dependency of the discriminator on some local features that are important to distinguish historical samples and encourage the discriminator to learn new knowledge from incoming data.
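As an illustrative sketch of how such attentive-region visualizations can be produced from a discriminator feature map (channel-averaging of absolute activations and min-max normalization are our assumptions for illustration, not necessarily the exact procedure used in the paper):

```python
import torch

def attentive_regions(feature_map):
    """Collapse a (C, H, W) feature map into an (H, W) attention map by
    averaging absolute activations over channels, then min-max normalizing
    to [0, 1] for visualization. Bright regions are where the discriminator
    responds most strongly."""
    a = feature_map.abs().mean(dim=0)
    return (a - a.min()) / (a.max() - a.min() + 1e-8)
```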
**Q2: Intuition or theoretical justification for the proposed "Dynamically Masked Discriminator"**
**A.** Thank you for the comments. We agree with the reviewer that a theoretical analysis would make our work more comprehensive. We will explore it in future work, as it is beyond the scope of this paper.
In this paper, we mainly show the challenges posed by the time-varying distributions, reveal that typical discriminators slow down their adaptation to the changes in the incoming data, and propose a new method to address these challenges.
For the intuition behind our method: as discussed in the responses above, we aim to enable the discriminator to pay attention to different local regions of generated samples at different steps, so that the discriminator learns the time-varying distributions of generated samples.
To this end, when the discriminator is detected to be retarded, our method dynamically masks the discriminator features to reduce its dependence on local features that are important for distinguishing historical samples (see Fig. 5). As a result, the discriminator is forced to re-build the dependency between incoming data and the remaining non-masked local regions, and hence learns new knowledge from the incoming data.
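A minimal sketch of the feature-masking step described above. The random spatial mask and the masking ratio are illustrative assumptions; the paper's exact masking policy may differ.

```python
import torch

def mask_discriminator_features(features, mask_ratio=0.3):
    """When retardation is detected, randomly zero out a fraction of the
    spatial positions of a (B, C, H, W) discriminator feature map. The mask
    is shared across channels, so whole local regions are dropped, breaking
    the discriminator's dependence on a few fixed discriminative regions."""
    b, _, h, w = features.shape
    keep = (torch.rand(b, 1, h, w, device=features.device) >= mask_ratio)
    return features * keep.to(features.dtype)
```

In training, such a mask would be applied to an intermediate discriminator layer only while retardation is flagged, leaving the rest of the pipeline unchanged.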
**Q3: Does the proposed method introduce a significant amount of computation cost?**
**A.** Thanks for the comments. Our method incurs a negligible increase in memory cost, since it only additionally detects discriminator retardation and introduces feature masks.
For computational cost, our method does not affect inference time, since it modifies only the discriminator and only the generator is used during inference. For training, our method introduces a moderate time cost; however, this is affordable (e.g., the number of GPUs can be increased to reduce training time), and the additional training time depends on the time interval at which retardation is estimated. In our experiments, when our method is integrated with StyleGAN-V2 and retardation is detected every 4 kimgs, it adds 6 hours 11 minutes of training time on AFHQ. When detecting retardation every 20 kimgs, our method adds only 1.3 hours.
**Q4: How well does the proposed method generalize to other GAN methods?**
**A.** Thanks for the valuable comments. Following your suggestions, we evaluate the generalizability of our method to other GAN methods by incorporating our method with StyleGAN-V3 and unconditional BigGAN, respectively. Compared with StyleGAN-V3 using the original discriminator (FID 5.850), our method improves the performance of StyleGAN-V3 by 15.88\% in the FID $\downarrow$ metric on AFHQ-Cat. Similarly, our method effectively improves the training of unconditional BigGAN, reducing the FID of unconditional BigGAN by 34.29\% on CIFAR10.
---
Rebuttal Comment 1.1:
Comment: Thanks for the comments. I read them on day one. I will keep my original rating.
Nice work!
---
Reply to Comment 1.1.1:
Title: Glad to Receive Your Response
Comment: Dear Reviewer dcqD,
Thank you for your positive feedback and insightful review comments, which helped us significantly improve the paper's quality. We appreciate your time and effort in reviewing our work.
Best,
The Authors. | Summary: The paper introduces a new method to enhance GAN training. Taking a continual learning perspective, the authors contend that the discriminator faces challenges in modeling the real/fake distribution due to the dynamic nature of the generated samples' distribution over time. Consequently, this slows down the learning process of the discriminator. The paper suggests that this slowdown occurs because the discriminator is trained using historical data. To address this issue, the authors propose a method to detect the sluggishness of the discriminator and dynamically mask the feature map to accelerate its learning.
The authors perform experiments on several benchmark datasets and demonstrate that the proposed method surpasses several existing GAN methods, as measured by FID and IS scores. Additionally, they provide an ablation study to examine the optimal choice of the layer to be masked and the corresponding masking ratio for achieving the best performance.
In general, the paper is well-written and easy to understand. However, it should be noted that the proposed improvement is incremental, as the concepts of continual learning perspective and dynamic discriminator training have previously been introduced in [74]. The main contribution of this paper lies in the use of masking to adjust discriminator training. It is important to address some claims made in the paper that currently lack empirical evidence, and it would be beneficial to provide further ablation study on retard detection.
Strengths: The proposed method introduces a technique to identify the slowdown and presents a novel way to mask feature maps to accelerate the discriminator learning.
The results are encouraging.
The paper is easy to follow.
Weaknesses: I find the claim that the retardation of the discriminator is attributed to learning on historical data somewhat unclear. The slowdown of the discriminator could also be a result of overfitting, causing the gradient to vanish and making learning difficult. It is necessary to provide further justification for the claim that historical data is the cause of the retardation. I suggest the authors include an experimental study to support their argument.
I would suggest the author consider including a comparison of the mean and variance between the proposed method and StyleGAN-2 in Figure 2. This would provide valuable insights into how the generator distribution changes with the proposed method. Additionally, it would be helpful to have more details about the figure. For instance, it would be beneficial to know the number of samples used to calculate the mean and variance. Could the author clarify whether the plot represents early epochs or late epochs? If the figure displays early epochs, it would be intriguing to observe how the changes evolve in the later epochs.
Similarly, I recommend that the author consider showcasing the model parameter difference in Figure 3 to demonstrate that the learning of the proposed method does not slow down. This would provide valuable evidence to support the claim. Additionally, it would be helpful if the author could clarify where the retard detection takes place during the training process when the proposed method is applied in this example.
The detection of retardation is a crucial aspect of the proposed method and warrants an ablation study to assess its impact. It would be valuable if the authors could provide an ablation study that demonstrates how frequently retardation is detected during the training process of the proposed method. Additionally, it would be interesting to explore the consequences of removing retardation detection and using random masking consistently throughout the training.
Overall, the paper presents several intriguing aspects. However, it is crucial to provide justifications and supporting evidence for the claims made. Without proper justification, there is a risk of potentially misleading the research community. I would consider the paper to be on the borderline, and I am looking forward to reading the authors' rebuttal addressing my questions and the feedback from other reviewers to make a final decision.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See questions and suggestions above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's positive comments and constructive suggestions. The reviewer agrees that ours is "a new method" with a "novel way to mask feature maps", that the paper is "well-written and easy to understand" and "surpasses several existing GAN methods", and that "the results are encouraging".
**Q1: Experimental study to show that the retardation of the discriminator is attributed to learning on historical data.**
**A.** Thank you for your suggestions. We conduct experimental studies inspired by the study in online continual learning for rapid adaptation [6]. Given the discriminator trained from the beginning to time $T$, we evaluate its historical knowledge retention by evaluating its discrimination performance on historical data generated at time $T$-$t$, and evaluate its rapid adaptation by evaluating its performance in current data at time $T$ and future data at $T+t$.
The table below shows that the discriminator of StyleGAN-V2 achieves high accuracy on historical data but performs much worse on current and future data. This shows that the discriminator of StyleGAN-V2 retains historical knowledge (i.e., high accuracy on historical data), yet does not quickly learn and adapt to the changes in the distribution of future data.
**Table**. The accuracy of the discriminator on historical data and future data, where the discriminator is trained at 800 kimgs step, and $(\cdot)$ indicates the data at time $T$-$t$ or $T+t$.
| Method | Historical data (200) | Historical data (600) | Current data (800) | Future data (1000)|
|:-------:|:----------:|:----------:|:----------:|:----------:|
|StyleGAN-V2|0.9696| 0.9488|0.8236|0.604 |
|Ours (StyleGAN-V2 + DMD)|0.9129 |0.9229|0.8793|0.6543|
Instead, by detecting retardation and masking the discriminator, our method forces the discriminator to reduce its retained historical knowledge (see the decreased accuracy on historical data), while effectively encouraging it to quickly learn and adapt to the new knowledge in future data (i.e., our method achieves higher accuracy on current and future data than StyleGAN-V2).
In addition, we calculate the average gradient of the StyleGAN-V2 during the training phase, showing the gradient does not vanish from 0 to 1200 kimgs.
**Table**: Average gradient of the StyleGAN-V2 during training
| kimgs | Gradient |
|:-----:|:------:|
| 0 | 3.5145e-08 |
| 200 | 4.6110e-07 |
| 400 | -9.4235e-07 |
| 600 | 5.6989e-07 |
| 800 | -4.3328e-07 |
| 1000 | 9.2649e-08 |
| 1200 | 1.2138e-06 |
**Q2: The number of samples used to calculate the mean and variance in Figure 2.**
**A.** We use 50k samples to calculate each mean and variance values.
**Q3: Whether the plot in Figure 2 represents early epochs or late epochs?**
**A.** The plot in Figure 2 represents the whole training process from early epochs to late ones on FFHQ.
**Q4: Model parameter difference of the proposed method and where the retard detection takes place.**
**A.** Thanks for your valuable suggestions. Our method detects that the discriminator is retarded at 1000 kimgs, after 800 kimgs of training. It then applies the proposed dynamic discriminator adjustment to train StyleGAN-V2 from 1000 to 1004 kimgs, which increases the model difference by 63.889\% compared with the model difference between 800 and 1000 kimgs. Without our method, the model difference of StyleGAN-V2's discriminator instead decreases by 2.778\%. These results show that StyleGAN-V2's discriminator slows down its learning and that our method effectively encourages the discriminator to learn quickly.
We will add our model's parameter difference in Fig. 3, as suggested.
**Q5: How frequently retardation is detected during the training process of the proposed method.**
**A.** Retardation is detected 256 times in total on AFHQ-Cat, when StyleGAN-V2 + DMD (ours) is trained for 1400 kimgs.
**Q6: Removing retardation detection and using random masking consistently throughout the training.**
**A.** Thank you for the suggestions. We build a baseline, namely Random-Mask Discriminator (RMD), which removes retardation detection from our method and uses random masking consistently throughout the training. We also build the second baseline, namely Time-Varying-mask Discriminator (TVD), which removes retardation detection and randomly changes the mask per mini-batch.
The table below shows that our method outperforms RMD and TVD by a large margin, thanks to the proposed retardation detection and dynamic discriminator adjustment modules.
**Table**: Comparison with the different masking strategies on AFHQ-Cat (256 $\times$ 256 pixels).
| Method | FID $\downarrow$ |
|:---------------:|:-----:|
| StyleGAN-V2 | 7.924 |
| StyleGAN-V2 + TVD | 9.294 |
| StyleGAN-V2 + RMD | 8.759 |
| Ours (StyleGAN-V2 + DMD) | **5.879** |
---
Rebuttal Comment 1.1:
Comment: Since the author-reviewer discussion is closing soon, we believe we have addressed your questions and comments, and we hope to hear from you. Thank you for your kind consideration!
**Q. Difference from DynamicD [74]**
**A.** Our paper is inspired by the insightful observation in DynamicD [74] and appreciates its valuable observations, as stated in Lines 35-36 of the main paper. However, our method and DynamicD [74] are different in three aspects:
• DynamicD [74] focuses on a problem different from ours. Our method focuses on detecting the retardation of the discriminator and encouraging the discriminator to rapidly adapt to new data, which haven’t been explicitly considered by existing GANs methods. Differently, DynamicD [74] focuses on adjusting the discriminator model capacity according to various data scales: increasing the model capacity for large training data, while decreasing the layer width given limited data.
• DynamicD [74] does not mention continual learning, while our method proposes a new perspective, i.e., online continual learning to address GAN training.
• Although our method is fully automated, it outperforms DynamicD [74] by a large margin (20.93%) on LSUN-Church (Table 2 in the main paper). | Summary: This paper proposes a new training method for generative adversarial networks from the perspective of online continual learning. To address the challenges posed to the discriminator by the dynamic changes in the generated data distribution during training, the authors propose to detect whether the discriminator has slowed its adaptation to newly generated data and to dynamically mask its features, forcing it to quickly learn new knowledge. Experimental results on FFHQ, AFHQ, and LSUN-Church show that the proposed method is superior to state-of-the-art methods in image generation.
Strengths: - Overall, the paper is well-organized and well-written, making it easy to follow. The analysis of the discriminator in the Pilot Study section is helpful for readers to understand the motivation of the work.
- The paper provides a comprehensive review of related work, which helps readers to better understand its contributions.
- The proposed method outperforms state-of-the-art methods, including diffusion models, in mainstream experiments according to the experimental results reported in the paper.
Weaknesses: - The notation used in the method description is confusing and inconsistent. For example, $\mathcal{D}$ and $\bar{D}$, $\theta^t$ and $\phi(t)$. Additionally, $\phi(t)$ is not included in discriminators as a parameter like $\theta^t$.
- The comparison of the visualization results is not significant. For instance, it is difficult to understand the improvements in the generated results of the proposed method compared to the baseline in Fig.1. Considering that this is a subjective evaluation, including objective evaluation metrics in this experiment would be helpful.
- The proposed method is kind of heuristic in nature, hence the technical novelty and contribution are limited. More advanced methods for online continual learning may be a better choice.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Considering that the proposed method requires continual detection of discriminator retardation, will this result in significant additional training overheads compared to baselines?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have not adequately addressed the **limitations of the proposed method** and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's careful review and constructive suggestions. We are encouraged that the reviewer agrees that our paper is "well-organized and well-written", presents a "new training method", "provides a comprehensive review", that the "Pilot Study section is helpful for readers to understand the motivation", and that our method "outperforms state-of-the-art methods".
**Q1: The notation used in the method description.**
**A.** Thank you for pointing this out. We will replace $\bar{D}$ by $\bar{\mathcal{D}}$ to maintain notation consistency between the discriminator without masks and that with masks. We will include $\phi(t)$ in the discriminator like $\theta^t$ in the final version.
**Q2. Including objective evaluation metrics in Figure 1**
**A.** Thank you for your constructive suggestions. Firstly, we introduce FID $\downarrow$ to evaluate generation quality. The FID of our method is 3.299, surpassing the baseline (FID 3.810).
Secondly, we introduce the cosine similarity metric. We aim to enable the discriminator to pay attention to different local regions of samples at different steps, so that the discriminator adapts to the time-varying distributions of generated samples. We extract binary attention maps from the feature maps in the second row of Figure 1, and then calculate the cosine similarity of the attention maps between the current training step and the previous one. A lower value indicates better performance, i.e., the attentive regions at the current step differ more from those at the previous step. The table below shows that our method makes the attentive regions of the discriminator differ more at each training step than the baseline, better adapting to the time-varying distribution of generated data. We will include the objective evaluation results in the final version.
**Table 2**: Cosine similarity $\downarrow$ of attention maps between current and previous steps in Figure 1.
| | t1 | t2 | t3 |
|:--------:|:------:|:------:|:------:|
| Baseline | 0.8672 | 0.8242 | 0.8145 |
| Ours | **0.2871** | **0.4336** | **0.5488** |
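A minimal sketch of the cosine-similarity metric described above. Thresholding each map at its own mean to obtain the binary attention map is an illustrative assumption, not necessarily the rebuttal's exact binarization rule.

```python
import torch

def attention_map_cosine(feat_t, feat_prev):
    """Binarize two (H, W) attention maps at their per-map mean and return
    the cosine similarity of the flattened binary maps; a lower value means
    the attended regions changed more between consecutive training steps."""
    def binarize(f):
        return (f > f.mean()).float().flatten()
    a, b = binarize(feat_t), binarize(feat_prev)
    return (torch.dot(a, b) / (a.norm() * b.norm() + 1e-8)).item()
```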
**Q3: More advanced methods for online continual learning.**
**A.** We will explore more advanced methods in our future work, thank you for your suggestion. Yet, as recognized by Reviewer zHGZ, "Rather than designing a complex network architecture, their method is designed to be easily integrated into any existing discriminator or used in combination with data augmentation methods which increases the significance and broader impact of the work". Reviewer dcqD also agreed that we propose "a novel method for training Generative Adversarial Networks (GANs) based on online continual learning" which achieves "Promising image results" on "Multiple datasets".
We designed such a simple and straightforward method because of the specific challenges introduced by GAN models. Unlike existing online continual learning methods for rapid adaptation (e.g., for classification), GAN models pose new challenges in that GAN training plays a min-max game between the generator and the discriminator. Sophisticated designs may increase the difficulty of optimizing the generator and the discriminator, leading to large instabilities in training.
Different from [74], which explores the influence of various discriminator capacities on training, we propose to detect the retardation of the discriminator and dynamically mask it to force it to learn fast, which is simple yet effective.
Nevertheless, we thank the reviewer for the suggestion and have tried to improve our discriminator retardation detection. In particular, if the retardation value $\mathcal{R}_t$ of the current time interval $t$ is higher than that of the previous one, the discriminator is flagged as retarded. With this new detection scheme, our method achieves 5.877 FID on AFHQ-Cat (256 $\times$ 256), on par with ours (5.879 FID) in the main text.
**Q4: Will the detection of discriminator retardation result in significant additional training overheads?**
**A.** Our method does not result in significant additional training overheads. In particular, the retardation of the discriminator is typically not abrupt but a smooth transition during training. This enables our method to detect retardation sparsely, instead of after every training image. The additional time cost is 6 hours 11 minutes for the whole training on AFHQ in our experiments, where our method is integrated with StyleGAN-V2 and detects retardation every 4 kimgs. When detecting every 20 kimgs, our method additionally takes only 1.3 hours.
**Q5: Limitations of the proposed method and potential negative societal impact**
**A.** Thank you for the comments. Due to the page limitation, we discuss the limitation of the proposed method in Sec. 5 of the Appendix:
Theoretical studies could make this work more comprehensive; however, we have not explored them here, since they are beyond the scope of this paper. Moreover, while the proposed method can effectively improve the training of CNN-based GAN models, combining our method with Transformer-based ones is left for future investigation.
Potential negative societal impact is also discussed in Sec. 5 of the Appendix:
Like other generative models, our method can be misused in applications such as Deepfake [1], where fake content is synthesized to deceive and mislead people, leading to a negative social impact. Nevertheless, many researchers have recognized this problem and are exploring fake-content detection and media forensics techniques. | Summary: In this paper, the authors propose a novel perspective on generative adversarial training via continual learning, which achieves better generative quality for local details and better quantitative metrics. To realize continual learning, the authors propose two modules: 1) a discriminator slow-down detection mechanism and 2) a dynamically masked discriminator for different time-steps of training.
Strengths: 1. The novel perspective of modelling GAN with continual learning is interesting and promising, which tries to improve GAN training on discriminators with fast adapting features.
2. The proposed method is simple and straightforward yet effective to be trained and probably would be general for any type of GAN network.
3. The model is well-explained, and the paper is well-written.
Weaknesses: 1. Even though the idea of continual learning for GAN is novel and exciting, it seems that the connection between continual learning and the dynamically masked discriminator is not well-explained.
2. The improvements on the face and animal-face datasets seem marginal; I would be more interested in training on a more diverse dataset, as SOTA generative models are mostly capable of generating on large-scale datasets such as ImageNet-1000.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Please refer to the above weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitation is not discussed; however, I am concerned about whether the method can be directly scaled up to large-scale dataset generation, such as ImageNet-1000.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's valuable comments. We are encouraged that the reviewer agrees that our work is “interesting and promising”, “novel and exciting”, “simple and straightforward yet effective to be trained”, “probably would be general”, “well-explained”, and “well-written”.
**Q1.The connection between continual learning and the dynamically masked discriminator.**
**A.** Thank you for the comments. Our method is inspired by recent work on online continual learning for rapid adaptation. These studies point out that data with a time-varying distribution requires learning methods to have the ability of fast learning and quick adaptation (see Lines 47-55 in the main paper). For GANs, the distribution of the generated data drifts over time as the generator evolves during training. This also requires the discriminator to quickly learn new knowledge from incoming data.
However, online continual learning for rapid adaptation has been underexplored [16, 6, 39]. Moreover, different from most online continual learning methods for rapid adaptation, training GAN models involves a min-max game between the generator and the discriminator, posing new challenges. For example, the incoming data of the discriminator is unknown and uncertain, while the generation results of the generator depend on the discriminator, and the discriminator learns from data generated by the generator. Yet, these challenges have not been considered by existing online continual learning methods.
To address these challenges, we propose a new method for GAN models, which detects discriminator retardation and masks the discriminator features. More specifically, masking the feature map of the discriminator breaks the discriminator's original dependency on local features that are important for distinguishing historical data, and enforces the discriminator to learn new knowledge from incoming data by rebuilding dependencies between the remaining non-masked local features and the incoming data.
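As a rough sketch of the spatial feature masking just described (not the authors' exact implementation; the NumPy realization and channel-shared mask are our assumptions):

```python
import numpy as np

def mask_discriminator_features(feat, ratio, rng):
    """Zero out a random fraction `ratio` of spatial locations in a
    (C, H, W) feature map, shared across channels, so the discriminator
    must rebuild dependencies on the remaining local features."""
    keep = (rng.random(feat.shape[1:]) >= ratio).astype(feat.dtype)
    return feat * keep  # broadcasts over the channel dimension
```
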
**Q2: Scale up to the large-scale dataset generation, such as ImageNet-1000.**
**A.** Thanks for the valuable comments. Following your suggestion, we train our method on ImageNet-1000. However, the settings of SOTA GAN methods typically require a large number of iterations (e.g., 150,000 iterations for BigGAN [5]) on ImageNet-1000, which takes more than two weeks to train from scratch with eight NVIDIA V100 GPUs [5]. Due to time limitations and a limited number of GPUs, we integrate our method with an intermediate checkpoint of the BigGAN model that has been trained for 100,000 iterations. We then further train the model from 100,000 to 107,500 iterations on ImageNet-1000 at 128$\times$128 resolution using two NVIDIA V100 GPUs.
Compared with the original BigGAN that achieves the FID $\downarrow$ of 12.213 at the 107,500$^{th}$ iteration, our method (i.e., DMD) improves the training of BigGAN, outperforming the original BigGAN by **8.196\%** (see the table below). This not only shows that our method benefits the training of BigGAN on large-scale datasets, but also demonstrates the flexibility and compatibility of our method in combination with GAN models such as pre-trained GAN models.
**Table**: Results of the proposed method on ImageNet-1000.
| Method | FID $\downarrow$ |
|:-------------------------------------:|:-----:|
| BigGAN | 12.213|
| Ours (BigGAN+ DMD) |**11.212** |
In addition, Table 2 in the main paper evaluates our method on a large dataset LSUN-Church, which contains 126K images and is used to challenge generative methods in recent GAN and diffusion work (e.g., [54,74]). Our method outperforms StyleGAN-V2 by 28.7\% on LSUN-Church, and surpasses a SOTA diffusion model LDM [54] by 23.9\%.
**Q3: The limitation.**
**A.** Thanks for your suggestions. Due to the page limitation, we have to discuss the limitation of our method in Sec. 5 of the appendix:
Theoretical studies could make this work more comprehensive; however, we have not explored them here, since they are beyond the scope of this study. Moreover, while the proposed method can effectively improve the training of CNN-based GAN models, combining our method with Transformer-based ones is left for future investigation.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply; my questions are partially addressed. However, I am still not convinced by the connection between continual learning and the dynamically masked strategy. Here is my question regarding "treating generated samples throughout training as a stream of data slows down learning of D":
the intuition is still not clear here. I get the point that D could fall into some lazy, easy patterns, and the masking strategy can force it to focus on more detailed and harder features for discrimination. But I still doubt why this would connect to continual learning, which is a concept for learning a model on a large number of tasks sequentially without forgetting knowledge obtained from the preceding tasks.
---
Reply to Comment 1.1.1:
Comment: Thank you for your time and effort in evaluating our work.
**Q1. Why this would connect to continual learning, which is a concept to learn a model for a large number of tasks sequentially without forgetting knowledge**
**A.** Thank you for the comments. Our work is in line with an emerging research direction: online continual learning towards rapid adaptation. Continual learning can be classified into two categories, offline and online [16], where offline continual learning mainly aims to mitigate forgetting. Recent online continual learning methods [16, 6, 39] focus on enabling **rapid adaptation to new incoming data**, where method [6] has revealed that *information retention (mitigating forgetting) and rapid adaptation are conflicting objectives, requiring careful compromises*.
In other words, compared with offline continual learning, both the methods [16, 6, 39] and our method address a more challenging but realistic problem. That is, (1) training data is seen only once; (2) the data is **not independent and identically distributed**; and (3) the data distribution undergoes **a fast-changing distribution shift**. This drives [16, 6, 39] and our method to focus on rapid adaptation.
Reviewer dcqD agrees that we propose "a novel method for training Generative Adversarial Networks (GANs) based on online continual learning, addressing the persistent challenge of GAN training instability."
In addition, one of the most recent continual learning works [R1] also points out that it is unnecessary to remember all historical information, and that "forgetting non-recurring information is not catastrophic", which is in line with our work. We will clarify this in our final version.
[R1] Saurabh Kumar, Henrik Marklund, Ashish Rao, Yifan Zhu, Hong Jun Jeon, Yueyang Liu, and Benjamin Van Roy. Continual Learning as Computationally Constrained Reinforcement Learning. arXiv preprint arXiv:2307.04345, 2023.
**Q2. The question for "treating generated samples throughout training as a stream of data slows down learning of D"**
**A.** Our manuscript does not have such a statement. We guess this refers to "We propose to detect whether the discriminator slows down the learning of new knowledge in generated data by treating generated samples throughout training as a stream of data" (see L51 in the main paper). This does not mean that treating generated samples as a stream of data slows down the learning of D. Instead, it means that we treat generated samples as a stream, enabling us to detect whether the discriminator slows down learning new knowledge.
We will revise it and make it clear. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes Dynamically Masked Discriminator (DMD) which automatically adjusts the discriminator by dynamically masking the features when learning slows down, forcing the discriminator to learn new knowledge in the generated data. It consists of two modules: (1) discriminator retardation detection and (2) dynamic discriminator adjustment. The first module detects when the discriminator starts to learn slower (i.e. rely on old knowledge rather than learn new distributions of generated data). The second module dynamically assigns or removes masks to the features of the discriminator. The experiment results show that their method helps increase the quality of generated samples across diverse model architectures and datasets.
Strengths: * The paper is well-motivated and clearly presented.
* Rather than designing a complex network architecture, their method is designed to be easily integrated into any existing discriminator or used in combination with data augmentation methods which increases the significance and broader impact of the work.
* They perform extensive experiments on a wide range of datasets and baseline architectures against various existing methods that improve discriminator training. Experiment results show that their method significantly reduces the FID score when combined with state-of-the-art GAN architectures. Qualitative results demonstrate that the method helps generate higher-quality samples without artifacts.
* The paper is clearly organized and provided with sufficient technical and implementation details.
Weaknesses: * Masking seems to be an important design factor for the efficacy of this method. The authors should elaborate on why switching from non-mask training to mask training is more desirable than continuously changing the mask ratio over time.
* There exist minor typos and grammar mistakes throughout the paper.
* The paper lacks the limitation section, which I feel would improve the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * What is the benefit of strictly separating the non-mask and mask training stage rather than changing the mask ratio over time?
* Has the authors experimented with higher-resolution images? This would increase the impact of this work.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors do not mention the limitation of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their insightful feedback and constructive comments. We are encouraged by the reviewer's positive comments, such as “well-motivated”, “clearly organized”, “easily integrated into any existing discriminator”, “significance and broader impact of the work”, "extensive experiments", "improve discriminator training", and "significantly reduces the FID score".
**Q1: Comparison with changing the mask ratio over time.**
**A**. Thanks for the valuable comments. Following your suggestion, we build a baseline, namely the Continually-Changed-mask-ratio Discriminator (CCD), which replaces our dynamic discriminator adjustment module with a schedule that gradually increases the mask ratio from 0.1 to 0.9, or decreases it from 0.9 to 0.1, over time.
The table below shows that our method outperforms CCD, since CCD increases instability in GAN training. More specifically, GAN training plays a min-max two-player game between the generator and discriminator, which is more unstable than typical classification problems. Compared with our method, CCD causes the discriminator's discrimination ability to change more frequently and become more uncertain, making it more difficult for the generator to fool the discriminator. The deteriorated performance of the generator further negatively affects the training of the discriminator. Instead, our method switches from non-mask training to mask training only when the improvement of the discriminator slows down, and otherwise maintains the current masking/non-masking state for a time interval, which provides more stable training than CCD.
**Table**: Comparison with changing the mask ratio over time on AFHQ-Cat (256$\times$256 pixels).
| Method | FID $\downarrow$ |
|:-------------------------------------:|:-----:|
| StyleGAN-V2 | 7.924 |
| StyleGAN-V2 + CCD | 8.441 |
| Ours (StyleGAN-V2+ DMD) | **5.879** |
In addition, we observe that continually changing the mask ratio requires a careful design of the decay/growth strategy of the mask ratio, which also needs to be adaptive to retardation detection. We thank the reviewer for the comment and agree that temporally changing the ratio can be a promising direction. We will explore this further in future work.
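For concreteness, the two schedules could be contrasted as below. The 0.1-0.9 linear ramp for CCD follows the rebuttal text, while the fixed mask ratio of 0.5 for the switching scheme and the function names are our own illustrative assumptions:

```python
def ccd_mask_ratio(step, total_steps):
    """CCD baseline: the mask ratio grows linearly from 0.1 to 0.9
    over the course of training."""
    return 0.1 + 0.8 * step / (total_steps - 1)

def dmd_mask_ratio(retardation_detected, ratio=0.5):
    """Switching scheme: no masking until retardation is detected,
    then a fixed mask ratio for the following time interval."""
    return ratio if retardation_detected else 0.0
```
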
**Q2: Minor typos and grammar mistakes.**
**A.** Thank you. We will correct typos and grammar mistakes.
**Q3: Has the authors experimented with higher-resolution images?**
**A.** Thank you for the constructive comments. We conduct experiments on the high-resolution AFHQ-Cat dataset, where the resolution of an image is 512 $\times$ 512 pixels. By integrating our method with the SOTA method StyleGAN-V2, our method reduces the FID $\downarrow$ of StyleGAN-V2 by a margin of **18.77\%** (from 4.3160 to 3.5061) on AFHQ-Cat (512 $\times$ 512), effectively improving the performance of StyleGAN-V2 on higher-resolution images.
**Q4: The limitation section.**
**A.** Thanks for your suggestions. Due to the page limitation, we have to discuss the limitation of our method in Sec. 5 of the appendix:
Theoretical studies could make this work more comprehensive; however, we have not explored them here, since they are beyond the scope of this study. Moreover, while the proposed method can effectively improve the training of CNN-based GAN models, combining our method with Transformer-based ones is left for future investigation.
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thank you for your detailed response. The authors have addressed all of my concerns. After reading comments from other reviewers and your feedback, I plan to keep my score.
---
Reply to Comment 1.1.1:
Title: Thank you for your valuable and positive feedback!
Comment: Dear Reviewer zHGZ,
We sincerely thank the reviewer for your insightful comments and positive feedback. Your constructive review comments help us significantly improve the quality of the manuscript and enhance the strength of our work.
Best regards,
The Authors. | null | null | null | null | null | null |
AV-NeRF: Learning Neural Fields for Real-World Audio-Visual Scene Synthesis | Accept (poster) | Summary: The paper introduces a new task called "real-world audio-visual scene synthesis", which involves synthesizing new videos with spatial audios along arbitrary novel camera trajectories in a scene. The proposed NeRF-based approach integrates a prior knowledge of audio into audio generation and associates it with the 3D visual environment. The proposed model is able to learn a sound source-centric acoustic field with the introduction of the coordinate transformation module that expresses a view direction relative to the sound source. Additionally, a high-quality Real-World Audio-Visual Scene (RWAVS) dataset is demonstrated along with the model.
Strengths: 1. The paper presents an interesting new task that associates the audio with 3D geometry and material properties of scenes and the video demo demonstrates the effectiveness of the proposed method.
2. The paper compares the proposed method with similar methods on both the RWAVS dataset and the SoundSpaces dataset and surpasses the previous state-of-the-art method.
3. The method proposes a novel dataset that can be further utilized in similar tasks.
Weaknesses: 1. The novelty of the proposed network seems to be limited. To tackle the new task, AV-NeRF combines an audio-NeRF and visual-NeRF and introduces an AV-mapper and the novelty mostly lies in the AV-mapper based on my current understanding.
2. The paper introduces a new task called "real-world audio-visual scene synthesis". However, as mentioned in the paper, this work currently focuses on static scenes with a single fixed sound source, which is somewhat removed from the term "real-world", since real-world scenes can contain many more sound sources.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. While the method is spatially aware, it seems less effective when the distance to the sound source changes, judging from the video demonstration. Could you provide more results?
2. The sound source demonstrated in the video is always the same. Can the sound source be changed, or does the sound-source device have certain requirements? If so, the proposed method could be limited.
3. As mentioned in the weaknesses, please clarify the novelty of the proposed method again.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: As mentioned in the paper, this work currently focuses on static scenes with a single fixed sound source. To better suit the word “real-world”, the method needs to be improved and adapted to multiple sound-source situations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for your insightful comments. Below are our responses to specific questions.
### Weakness
**W1: Novelty of AV-NeRF.** In addition to AV-Mapper, our paper introduces two essential components -- the innovative Audio-NeRF architecture and a novel coordinate transformation mechanism.
We would like to clarify that it is non-trivial to extend NeRF to the audio domain.
Our A-NeRF pipeline is carefully designed to capture the influence of the receiver's position and direction on sound perception. It consists of two separate Multi-Layer Perceptrons (MLPs) that learn the acoustic field: the first MLP parameterizes the energy attenuation with respect to the distance between the sound source and the receiver, and the second models the channel difference of stereo sound caused by the receiver's viewing direction. By disentangling the acoustic field with these two modules, A-NeRF enables distance-sensitive and direction-aware audio synthesis.
Another contribution of our work is the proposed coordinate transformation. By studying the nature of human perception of sound sources -- humans perceive sound direction relative to the sound source rather than in absolute terms -- we propose replacing the absolute coordinate system, which is commonly used in NeRF, with a relative one. By expressing the viewing direction of a camera pose relative to the sound source, we encourage A-NeRF to learn a sound source-centric acoustic field and establish a strong correspondence between the viewing direction and the spatial audio.
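One plausible realization of such a source-relative transformation is sketched below. This is our own illustration under assumed conventions (3D positions, unit-normalized directions), not the paper's exact code:

```python
import numpy as np

def source_relative_angle(cam_pos, cam_dir, src_pos):
    """Angle between the camera's viewing direction and the direction
    from the camera to the sound source (0 = facing the source)."""
    to_src = src_pos - cam_pos
    to_src = to_src / np.linalg.norm(to_src)
    view = cam_dir / np.linalg.norm(cam_dir)
    return np.arccos(np.clip(np.dot(view, to_src), -1.0, 1.0))
```
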
**W2: Multiple sound sources.** We agree that there are multiple sound sources in real-world scenarios, and the initial setting of this work was simplified. To address this limitation, we extend both the RWAVS dataset and the AV-NeRF model to support multi-source scenes. Specifically, we collect two multi-source scenes, adhering to the recording settings outlined in Section 5.1. The only modification we make is placing two sound sources instead of a single one in the environment, thus creating multiple sound sources. We extend AV-NeRF by stacking multiple equal A-NeRF modules to support the parameterization of multi-source acoustic fields. AV-NeRF generates acoustic masks for each sound source separately. We conduct experiments in these multi-source scenes and present the performance of AV-NeRF and other baselines in the table below.
|Methods|Multi-source (MAG) | Multi-source (ENV)|
|:-:|:-:|:-:|
|Mono-Mono | 1.949 | 0.172 |
| Mono-Energy | 0.533 | 0.075 |
| Stereo-Energy | 0.527 | 0.073 |
| INRAS | 0.472 | 0.078 |
| NAF | 0.401 | 0.080 |
| Ours | **0.282** | **0.063** |
As presented in the table, AV-NeRF outperforms other baselines in both MAG (0.282) and ENV (0.063) metrics. The experimental results clearly demonstrate the proposed method's capability to effectively handle multiple sound sources. In our revision, we will augment the RWAVS dataset with multi-source scenes and include the corresponding results in the main paper.
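A minimal sketch of how stacked per-source acoustic masks could be combined into one output; the mask domain (here, magnitudes) and the simple summation are our assumptions for illustration:

```python
import numpy as np

def mix_multisource(source_mags, masks):
    """Apply each source's predicted acoustic (magnitude) mask and sum
    the masked sources into a single rendered output."""
    assert len(source_mags) == len(masks)
    out = np.zeros_like(source_mags[0])
    for s, m in zip(source_mags, masks):
        out += m * s
    return out
```
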
### Question
**Q1: Distance effects.** In response to the reviewer's question, we present additional results in Figure 2 (please refer to
the attached PDF file in the global response) that demonstrate AV-NeRF's capability to generate distance-aware auditory effects. For each scene, we select a fixed source audio and synthesize new binaural audio at varying positions and distances from the sound source. The first row displays the rendered images, while the second row showcases the corresponding rendered audio. As illustrated in the figure, the amplitude of the rendered audio diminishes as the distance between the source and the camera increases, and conversely, it increases as the camera approaches the source.
**Q2: Sound source.** There are no specific requirements for the sound source device. It can be any object that emits sound, including phones, laptops, instruments, or even humans. In our case, we utilize a speaker as the sound source to simplify the data recording process.
**Q3: Please refer to W1.**
### Limitation
**L1: Please refer to W2.**
---
Rebuttal Comment 1.1:
Title: Comment
Comment: Thank you for the clarification and the efforts you put into the rebuttal. All the questions are responded and the novelties are clarified in the response. The spatial awareness is demonstrated in the rebuttal and the effectiveness is proved. I would incline to raise my rating.
---
Reply to Comment 1.1.1:
Comment: We are grateful for the reviewer's encouraging comments. We will incorporate multi-source results and spatial awareness figures into our revised paper. | Summary: The paper introduces a novel task called real-world audio-visual scene synthesis to generate novel views and spatial audio for any camera trajectories. The system contains a visual NeRF to synthesize novel view images, an audio NeRF to generate acoustic masks, and an audio-visual mapper to enable the visual information to enhance the A-NeRF. The authors also collect a new real-world dataset to demonstrate the task's importance and for benchmarking. Experiments on synthetic and real datasets show the effectiveness of the proposed method.
Strengths: 1. The task of audio-visual scene synthesis in the real world is essential. To my best knowledge, it is the first work that aims to generate both video and spatial audio in a real environment.
2. Multimodal data collection for the real scene is challenging and requires lots of effort. I could see the new real data would be very useful for future research.
3. Each component in the model pipeline is well-motivated and makes sense.
4. The paper is generally well-written and easy to follow.
5. Extensive experiments and ablation studies are done on real and synthetic data to show the effectiveness of the proposed method.
Weaknesses: 1. As the authors mention in the limitation, the current approach mainly works for static scenes with a single fixed sound source. However, as the initial exploration, the current setup makes sense.
2. It is often challenging to interpret and understand the quantitative results of generated audio. While the proposed method outperforms baselines across all quantitative metrics, I would suggest that the authors add human subjective evaluations in the future to assess the perceptual difference.
3. As a new benchmark, it would be great to show some failure cases for the current method to indicate potential improvement in the future.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. If I understand correctly, the source mono audio is fixed for each scene in the RWAVS dataset. Have you tried to use another source audio during the inference time, and does it sound as good as before?
2. Following the previous question, I guess one of the main differences between the real world and simulated data is that in the simulation platform, we could obtain dense impulse responses in arbitrary positions, but in the real world, it is obviously easier to play source audio and collect data along a trajectory continuously. However, how could we effectively validate the correctness at the inference stage if we are using new source audio without spatial audio ground truth?
3. Since the goal is to generate video frames and spatial audio, the current A-V mapper aims to extract useful information from visuals to help A-NeRF. Is it also possible to use it to extract useful information from the audio side to enhance the synthesized visual frame?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for your constructive comments and greatly appreciate your strong positive acknowledgment of our work. Below, we address each specific question.
### Weakness
**W1: Single sound source.** In our paper, we initially focused on a simplified scenario with a single sound source, acknowledging that it may not fully represent real-world complexities. We appreciate the reviewer's understanding that "as the initial exploration, the current setting makes sense." To address this limitation, we extend both the RWAVS dataset and the AV-NeRF model to support multi-source scenes. Specifically, we collect two multi-source scenes, adhering to the recording settings outlined in Section 5.1. The only modification we make is placing two sound sources instead of a single one in the environment, thus creating multiple sound sources. We extend AV-NeRF by stacking multiple equal A-NeRF modules to support the parameterization of multi-source acoustic fields. AV-NeRF generates acoustic masks for each sound source separately. We conduct experiments in these multi-source scenes and present the performance of AV-NeRF and other baselines in the table below.
|Methods|Multi-source (MAG) | Multi-source (ENV)|
|:-:|:-:|:-:|
|Mono-Mono | 1.949 | 0.172 |
| Mono-Energy | 0.533 | 0.075 |
| Stereo-Energy | 0.527 | 0.073 |
| INRAS | 0.472 | 0.078 |
| NAF | 0.401 | 0.080 |
| Ours | **0.282** | **0.063** |
As presented in the table, AV-NeRF outperforms other baselines in both MAG (0.282) and ENV (0.063) metrics. The experimental results clearly demonstrate the proposed method's capability to effectively handle multiple sound sources. In our revision, we will augment the RWAVS dataset with multi-source scenes and include the corresponding results in the main paper.
**W2: Human subjective evaluation.** We agree with the reviewer's valid concern regarding the interpretability and comprehension of the quantitative results for the generated audio. To address this, we plan to conduct a human subjective evaluation in the future to perceptually assess the quality of the generated content.
**W3: Failure cases.** As suggested by the reviewer, we include failure cases in Figure 1 (please refer to the attached PDF file in the global response). In the figure, we present the rendered images, synthesized binaural audio, and ground-truth audio. AV-NeRF makes incorrect acoustic predictions when the audio-visual scene involves noticeable noise or when the AV-Mapper cannot extract reliable material and geometry information from the visual space.
1. Because we work on **real-world** audio-visual learning, various types of noise inevitably exist in a scene, e.g., sounds from refrigerators, air conditioning, wind, or even workers making noise during data collection. These ambient noises are recorded by the microphone and are present in the ground-truth binaural audio, which hinders AV-NeRF from accurately learning the acoustic field. In Figure 1 (a), the ground-truth audio (used for training) marked with red boxes contains noticeable noise, resulting in inaccurate audio rendering.
2. AV-NeRF relies on the AV-Mapper to extract reliable material and geometry information from input images, enabling a comprehensive understanding of the audio-visual environment. However, in cases where the AV-Mapper fails to extract meaningful information from certain images (e.g., images of plain walls or whiteboards), AV-NeRF predicts erroneous acoustic masks. In Figure 1 (b), the rendered audio marked with black boxes displays inconsistent waveforms compared to the ground-truth audio.
### Question
**Q1: Replace source audio.** For quantitative evaluation, we use a fixed source audio input for AV-NeRF, which guarantees that the source audio and the recorded ground-truth audio share the same auditory content. However, during the general inference process, AV-NeRF can accept any source audio of interest, allowing us to generate binaural audio with rich acoustic properties. For instance, we collect an outdoor scene in the RWAVS dataset using a music song as the source audio, and train AV-NeRF with both the source music and the recorded music. After completing the training, we feed AV-NeRF arbitrary source audio to synthesize new binaural audio. In our demo videos, we replace the music with human speech and successfully synthesize coherent binaural sounds; the generated speech audio is acoustically consistent with the camera movements.
**Q2: Unavailable ground-truth audio.** We can synthesize pseudo ground-truth audio to sidestep the unavailability of ground-truth audio when we use new source audio to evaluate the correctness of an approach. Although ground-truth audio is not directly available for these new samples, we have access to the original source audio clip $a_s$ and the binaural audio clip $a_t$ from the dataset. Leveraging a deep neural network [1], we can estimate the corresponding impulse response from $a_s$ and $a_t$. By convolving the estimated impulse response with the new source audio, we synthesize the pseudo ground-truth audio. This approach effectively provides us with ground-truth binaural audio for performance evaluation.
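A minimal sketch of this pseudo ground-truth construction, assuming the per-channel impulse responses have already been estimated from $(a_s, a_t)$ (the estimation network itself is not shown):

```python
import numpy as np

# Hedged sketch of the pseudo ground-truth construction described above.
# We assume per-channel impulse responses `h_left` / `h_right` have
# already been estimated from the dataset pair (a_s, a_t), e.g. with a
# deep network as in [1]; here they are plain arrays for illustration.
def pseudo_ground_truth(new_source, h_left, h_right):
    """Convolve a new mono source with estimated impulse responses."""
    left = np.convolve(new_source, h_left)
    right = np.convolve(new_source, h_right)
    return np.stack([left, right])  # (2, T) pseudo binaural target
```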
**Q3: Enhance visual synthesis.** We agree with the reviewer's suggestion that extracting valuable auditory information can aid in visual learning. As shown in [2], acoustic information can enhance the quality of visual reconstruction. Moreover, [3] demonstrates that acoustic features extracted from sound echoes can assist in spatial perception tasks, such as monocular depth estimation and surface normal estimation. It would be very interesting to explore audio-assisted vision learning in the future.
Please refer to the global rebuttal for reference.
---
Rebuttal Comment 1.1:
Title: Post Rebuttal
Comment: Thank the authors for the detailed responses. My questions are well addressed. I will keep my score and recommend accepting the paper.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your recommendation of our paper! Your constructive comments and suggestions have helped us improve our paper a lot. | Summary: Inspired by the task of novel view synthesis, this paper additionally considers the audio modality and thus proposes an interesting new task: real-world audio-visual scene synthesis. Specifically, the task is to synthesize new videos with corresponding spatial audio along arbitrary novel camera trajectories by learning from a video recording of an audio-visual scene. To solve this problem, the authors resort to AV-NeRF, an A-NeRF for spatial audio synthesis, and a V-NeRF for video synthesis, respectively. For V-NeRF, they directly use the Vallina NeRF model. For A-NeRF, they explore extracting 3D geometry and material properties from the corresponding visual cues of V-NeRF. Moreover, a coordinate transformation module is proposed for more accurate spatial sound reconstruction. Better performance on a self-collected real-world dataset RWAVS and a simulation-based dataset SoundSpaces demonstrates the effectiveness of the proposed method AV-NeRF.
Strengths: 1. This paper proposes an interesting task: real-world audio-visual scene synthesis, which extends the task of novel view synthesis from only visual modality to multiple modalities.
2. The A-NeRF architecture introduced in this paper is well-designed. It considers sound propagation and view directions, which is quite reasonable. Besides, the coordinate transformation is also shown effective through comprehensive experiments.
3. Exhaustive experiments show the superior performance of AV-NeRF over several strong baseline methods.
4. The whole paper is well-written and presented in a coherent manner.
Weaknesses: 1. The design of the most important module AV-Mapper is not reasonable. First, the authors just use a pretrained ResNet-18 network to extract RGB and depth features from RGB and depth images. I'm curious how the ResNet-18 network could obtain material information from RGB images since the semantics can't imply the material properties at all. For a black desk, I can't tell if it's made of wood or steel just from the appearance. Actually, the authors can do some testing experiments, such as conducting color jittering and seeing what happens, or cross-scene testing to see whether the ResNet-18 network can extract materials and geometry information. Second, depth images can't inform all the geometry information. For example, the occlusion, which significantly influences sound propagation, can't be reflected on depth maps.
2. For spatial audio modeling, a useful metric that can really reflect the necessity of distance and view directions is required. Can the distance help to determine the energy of sounds on the receiver side? It's impossible to tell only from MAG and ENV metrics. The quantitative result w.r.t. **a new useful metric** should be provided in the paper. Similarly, I can't tell whether the view directions are useful for modeling the relative energy of left and right channels. I also wonder which one is more important for spatial audio modeling.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How can the network obtain the distance between the camera and the sound source only from their positions? As we all know, distance calculation includes square and square roots, which are hard to approximate with simple neural networks.
2. The coordinate system in each scene. How to determine the origin?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: 1. The biggest limitation is the inappropriate modeling of AV-Mapper. The authors claim this module can obtain the material and geometry information from V-NeRF. However, from the perspective of this reviewer, such important information can't be inferred from the current design. Please refer to the weaknesses for details.
2. The current evaluation protocols can't validate the effectiveness of some modules employed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for your thoughtful review and valuable feedback. We address specific questions below.
### Weakness
**W1: Rationality of AV-Mapper.** Great question! We leverage RGB and depth images as **implicit** indicators of environmental material properties and geometry. While these images enable the model to perceive the environment implicitly, it is important to note that achieving a precise one-to-one mapping from images to material properties or geometry is not guaranteed for AV-Mapper. Certain corner cases, such as a black desk and object occlusion as mentioned by the reviewer, can lead to the failure of AV-Mapper.
Nevertheless, existing studies have demonstrated the feasibility of inferring material and geometry information from images. For instance, back in 2002, Varma and Zisserman [1] proposed a filter-based approach for material classification in images. More recently, in 2017, Zhang et al. [2] successfully recognized material and shape attributes using visual data. Moreover, there is a body of audio-visual research indicating that geometry and material information extracted via ResNet networks can contribute to audio synthesis [3, 4]. Our paper further supports this concept. As illustrated in Table 2 (left) of our paper, the inclusion of AV-Mapper in AV-NeRF improves the MAG score by 16% from 1.791 to 1.504 and enhances the ENV score by 3% from 0.150 to 0.145.
**W2: Necessity of distance and direction.** Thanks for the suggestion! To further validate the necessity of distance and direction, we conducted additional ablation studies. We intentionally exclude positional and directional coordinates when representing an auditory scene and evaluate the performance without these inputs. The results are presented below. AV-NeRF achieves a MAG metric score of 1.504 and an ENV metric score of 0.145 when both position and direction information are utilized. Removing either the positional or directional input leads to a performance drop of between 2% and 21%. In the absence of both positional and directional inputs, performance degrades further to 1.822 on the MAG metric and 0.156 on the ENV metric. These outcomes distinctly affirm the indispensable role of distance and direction coordinates in acoustic modeling.
For metrics, we follow existing audio-visual learning work [5, 6, 7], utilizing MAG and ENV metrics to evaluate the quality of generated binaural audio. MAG and ENV metrics reflect the generation quality in time-frequency and time domain, respectively. We appreciate that the reviewer suggested developing a novel metric to quantify the influence of distance and direction on acoustic modeling and to establish their significance. It would be a very interesting idea to explore in the future.
|Methods (Position) |Methods (Direction) |Overall (MAG) | Overall (ENV) |
|:-:|:-:|:-:|:-:|
| | | 1.822 | 0.156 |
| | ✓| 1.817 | 0.155 |
| ✓| | 1.688 | 0.148 |
| ✓| ✓| **1.504** | **0.145**|
### Question
**Q1: Distance between the camera and the sound source.** Audio-NeRF parameterizes the acoustic scene by learning a mapping from 5D coordinates $(x,y,z,\theta,\phi)$ to corresponding acoustic masks $\mathbf{m}_m, \mathbf{m}_d$ (Equation 5). The prediction of these acoustic masks solely relies on the query pose $(x,y,z,\theta,\phi)$. Audio-NeRF is designed to capture and learn acoustic effects caused by the distance, such as energy attenuation. However, it is not obligated to directly model and approximate the specific distance between the camera and the sound source. If we have misunderstood the reviewer's question, we kindly request further clarification, and we are more than willing to address any questions you might have.
**Q2: Origin of the coordinate system.** We describe below the process of establishing the coordinate system for our camera poses $P=\{p_1, p_2, \dots, p_N\}$, where each pose is denoted as $p_i = (x_i, y_i, z_i, \theta_i, \phi_i)$, and $N$ represents the total number of camera poses. To ensure uniformity, we normalize the 3D coordinates $(x_i, y_i, z_i)$ to be within the range of $[-1, 1]\times[-1, 1]\times[-1, 1]$. This normalization is achieved by dividing each coordinate by the maximum value of all corresponding coordinates along the same axis. For instance, we normalize $x_i$ as $x_i^* = \frac{x_i}{\max_{k=1}^{N}|x_k|}$. This coordinate normalization practice is commonly used in training NeRF. Once all camera poses are normalized, we designate $(0, 0, 0)$ as the origin of our coordinate system.
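The normalization above can be sketched as follows (a minimal illustration, not the authors' actual code):

```python
# Minimal sketch of the normalization above: each 3D coordinate is
# divided by the maximum absolute value along its axis, mapping all
# camera positions into [-1, 1]^3; angles (theta, phi) are kept as-is.
def normalize_poses(poses):
    """poses: list of (x, y, z, theta, phi) tuples."""
    max_abs = [max(abs(p[axis]) for p in poses) or 1.0 for axis in range(3)]
    return [
        (p[0] / max_abs[0], p[1] / max_abs[1], p[2] / max_abs[2], p[3], p[4])
        for p in poses
    ]
```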
### Limitation
**L1: Please refer to W1.**
**L2: Please refer to W2.**
[1] Varma, Manik, and Zisserman, Andrew. "Classifying images of materials: Achieving viewpoint and illumination independence." ECCV. 2002.
[2] Zhang, Zhoutong, et al. "Generative modeling of audible shapes for object perception." ICCV. 2017.
[3] Chen, Changan, et al. "Learning audio-visual dereverberation." ICASSP. 2023.
[4] Singh, Nikhil, et al. "Image2reverb: Cross-modal reverb impulse response synthesis." ICCV. 2021.
[5] Gao, Ruohan, and Grauman, Kristen. "2.5 d visual sound." CVPR. 2019.
[6] Xu, Xudong, et al. "Visually informed binaural audio generation without binaural audios." CVPR. 2021.
[7] Chen, Changan, et al. "Novel-view acoustic synthesis." CVPR. 2023.
---
Rebuttal Comment 1.1:
Title: I have read authors' rebuttal
Comment: Thanks so much for your detailed response, which has already addressed most of my concerns. I'm happy to accept this paper and therefore decide to raise my score. Please include these additional results and discuss the limitations of AV-Mapper in the revision. By the way, for the distance between the camera and the sound source, I mean the root or square root operation may be hard to approximate with simple neural networks (just two layers for example), and wonder about the detailed neural network architectures.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for recommending the acceptance of our paper! We will revise the paper to include additional results and discuss AV-Mapper’s limitations.
We agree with the reviewer that square and square-root operations are difficult to approximate with simple networks. However, AV-NeRF does not need to explicitly model the distance between the camera and the sound source. Therefore, this issue does not apply to AV-NeRF.
For model architectures, we provide a detailed description below:
1. A-NeRF consists of two Multilayer Perceptrons (MLPs), each with four linear layers and an additional residual connection. The width of each linear layer is set to 128 for the RWAVS dataset and 256 for the SoundSpaces dataset.
2. In V-NeRF, we design a 64-width 2-layer MLP for density modeling and a 64-width 3-layer MLP for color modeling.
3. The AV-Mapper is implemented as a 3-layer MLP, with hidden dimensions decreasing gradually from 512 to 128.
4. For all MLPs, we use ReLU as the activation function.
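A framework-free sketch of one such MLP block (illustrative placeholder weights; the exact residual placement is an assumption based on the description above):

```python
# Framework-free sketch of an A-NeRF MLP block as described above:
# linear layers with ReLU activations plus a residual connection.
# Weights are placeholders for shape illustration, not a trained model,
# and the residual placement is an assumption from the text.
def relu(v):
    return [max(0.0, x) for x in v]

def linear(v, weight, bias):
    # weight: out_dim x in_dim, bias: out_dim
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weight, bias)]

def mlp_block(v, layers):
    """Apply linear+ReLU layers, then add the input back (residual)."""
    h = v
    for weight, bias in layers:
        h = relu(linear(h, weight, bias))
    return [a + b for a, b in zip(h, v)]
```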
We have provided the detailed information in the appendix, but we are happy to move it to the main paper if that is preferred. For reproducibility, we will release our source code, data, and models. | Summary: This paper describes an intriguing task that focuses on the synthesis of new perspectives of real-world audiovisual scenes. The task is to synthesize a new video with spatial audio along an arbitrary novel camera track in an audiovisual scene given a video recording of that audiovisual scene. A data acquisition system is built and a Real-World Audio-Visual Scene (RWAVS) dataset is collected in this paper.
First, the paper proposes an acoustically aware audio generation module that integrates prior knowledge of audio propagation into NeRF, thus relating audio generation to the 3D geometry of the visual environment. In addition, a coordinate transformation module that represents the direction of observation with respect to the sound source is proposed. This directional transformation aids the model in learning the source-centered sound field. This paper's superiority is demonstrated by qualitative and quantitative results.
Strengths: 1. The synthesis of new views on audiovisual scenes in the real world is an interesting task that contributes to a better understanding of the world and the dataset presented can contribute to the development of the field.
2. The paper is well written and each module is clearly presented and easy to understand.
3. The proposed AV-Mapper and the coordinate transformation mechanism can effectively fuse visual geometric information with audio information and effectively express sound direction.
Weaknesses: 1. The vanilla NeRF [15] structure is used in this paper's V-NeRF module, which is relatively slow to train and sample from. Existing acceleration models for NeRF can already significantly improve training and sampling speed, and a better model might achieve higher accuracy in this paper.
2. How does the proposed model handle sounds that are not within the field of view?
3. There are numerous audio-visual datasets available, such as the AVSBench (https://github.com/OpenNLPLab/AVSBench). What is the proposed method's performance on this dataset?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: As above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: As above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for your helpful suggestions and encouraging comments. We address specific comments below.
### Weakness
**W1: NeRF acceleration.** We concur with the reviewer's observation that one limitation of the vanilla NeRF is its slow rendering speed. In our study, we actually employ acceleration techniques for NeRF to enhance both the training and rendering processes. Sorry for the misunderstanding! We would like to further clarify the implementation details here. Specifically, we incorporate the `tiny-cuda-nn` library, which facilitates rapid NeRF training and querying [1, 2]. Furthermore, we utilize `nerf-studio` [3] to enhance rendering quality, as detailed in L23-L28 of the supplementary material. `nerf-studio` offers support for camera pose refinement, image appearance conditioning, hash encoding, and proposal sampling.
**W2: Out-of-view sound.** Audio-NeRF parameterizes the acoustic scene by learning a mapping from 5D coordinates $(x,y,z,\theta,\phi)$ to corresponding acoustic masks $\mathbf{m}_m$ and $\mathbf{m}_d$ (see Equation 5). The prediction of acoustic masks is solely based on the query pose $(x,y,z,\theta,\phi)$. The RGB and depth images input into AV-Mapper serve as implicit indicators of the environment's material properties and geometry. Therefore, the absence of sound sources in the field of view has no impact on mask prediction.
**W3: Results on AVSBench dataset.** We appreciate the reviewer for proposing the AVSBench dataset as a valuable resource. We acknowledge the significance of this dataset within the audio-visual community and will cite it in our revised manuscript. However, we would like to note that the AVSBench dataset may not align with our specific data requirements. Our proposed AV-NeRF model necessitates access to ground-truth audio data from both the sound source and the sound receiver. Regrettably, the videos provided by the AVSBench dataset lack the essential ground-truth sound information pertaining to the sound source. Furthermore, the videos encompass dynamic scenes, posing challenges for accurate camera pose estimation. Due to these limitations, conducting experiments using the AVSBench dataset becomes impractical for our study.
[1] Müller, Thomas. Tiny CUDA Neural Network Framework.
[2] Müller, Thomas, et al. "Instant neural graphics primitives with a multiresolution hash encoding." ACM Transactions on Graphics (ToG) 41.4 (2022): 1-15.
[3] Tancik, Matthew, et al. "Nerfstudio: A modular framework for neural radiance field development." ACM SIGGRAPH 2023 Conference Proceedings. 2023.
---
Rebuttal Comment 1.1:
Comment: My concerns have been addressed.
My only concern is the reproducibility of the proposed method. Therefore, I keep my initial rating.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for your helpful comments. For reproducibility, we will release our source code, data, and models. | Rebuttal 1:
Rebuttal: We would like to express our gratitude to all the reviewers for their valuable comments and feedback. We address the specific questions of each reviewer individually. Additionally, we attach a PDF file containing some figures in response to certain reviewers.
**To reviewer 4psZ:** we show some additional results in Figure 2 of the attached one-page PDF file.
**To reviewer iWmP:** we show failure cases in Figure 1 of the attached one-page PDF file. Due to the word limit of rebuttal, we place the reference paper here:
[1] Richard, Alexander, Peter Dodds, and Vamsi Krishna Ithapu. "Deep impulse responses: Estimating and parameterizing filters with deep networks." ICASSP. 2022.
[2] Luo, Andrew, et al. "Learning neural acoustic fields." NeurIPS. 2022.
[3] Gao, Ruohan, et al. "Visualechoes: Spatial image representation learning through echolocation." ECCV. 2020.
Pdf: /pdf/caaaddc61d1a72cbe4b48a494f0fddd37b4df66d.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The paper proposes an audio-visual (AV-) NeRF model to synthesize binaural audio masks at novel poses by using audio-visual samples from a video walkthrough of a 3D scene. The synthesized audio masks can be convolved with any arbitrary anechoic audio signal to retrieve the corresponding binaural audio at the novel poses. Towards that goal, the paper proposes a model that uses the learned density information from a visual (V-) NeRF model to provide geometric and color information at a novel pose. Given this visual information, an audio (A-) NeRF model predicts masks that can be further used to synthesize the binaural audio for a monaural sound source. Additionally, the paper proposes an alternate parameterization of the direction component of the camera pose parameters that accounts for the omnidirectionality of most sound sources. The paper also collects a real-world dataset for the task. The paper evaluates on both simulated and real-world data, compares against multiple baselines and reports impressive results.
Post author-reviewer discussion: I have read the rebuttal. It addresses my concerns. I will keep my score and recommend accepting the paper.
Strengths: 1. The paper proposes a new AV-NeRF model for binaural audio synthesis that leverages learned color and geometry information for novel poses from a V-NeRF model.
2. The alternate parameterization of the camera pose parameters is also a useful contribution that captures the physics of how sound travels from an omnidirectional sound source.
3. The paper also introduces a new real-world dataset for the task, which could be a valuable contribution for the community.
4. The paper evaluates on both simulated and real-world data, compares against multiple baselines and reports impressive results.
Weaknesses: 1. The paper doesn't compare with recently published ViGAS [14] model. The paper argues that ViGAS needs GT images and is limited to a few viewpoints. However, the images rendered by a standalone V-Nerf can be used as input to ViGAS (since it's not a contribution of this paper anyway), and ViGAS can be trivially adapted to render spatial sounds at arbitrary points.
Minor:
1. L131: [40] generalizes to new scenes, unlike the model proposed in this paper (also pointed out by the authors in limitations). Since there are no experiments on novel scenes, this statement looks unsubstantiated.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I would like to request the authors to address the concerns I mentioned in 'Weaknesses'.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper already discusses its limitations and societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for your constructive comments and encouraging remarks. We address specific comments below.
### Weakness
**W1: Compare with ViGAS.** Because ViGAS is a concurrent work to our paper, we did not compare AV-NeRF with ViGAS in our submitted paper. However, in response to the reviewer's suggestion, we conduct an evaluation of ViGAS on the RWAVS dataset for performance comparison. We utilize the official open-source code available on GitHub and make minimal adjustments to the framework, enabling support for arbitrary camera poses. The obtained results of ViGAS's performance in each scene, as well as the overall performance, are presented below.
| Methods | Office (MAG) | Office (ENV) | House (MAG) | House (ENV) | Apartment (MAG)|Apartment (ENV)|Outdoors (MAG)|Outdoors (ENV) |Overall (MAG)|Overall (ENV)|
|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|INRAS | 1.405 | 0.141 | 3.511 | 0.182 | 3.421 | 0.201 | 1.502 | 0.130 | 2.460 | 0.164 |
|NAF | 1.244 | 0.137 | 3.259 | 0.178 | 3.345 | 0.193 | 1.284 | 0.121 | 2.283 | 0.157 |
|ViGAS | 1.049 | 0.132 | 2.502 | 0.161 | 2.600 | 0.187 | 1.169 | 0.121 | 1.830 | 0.150 |
|Ours | **0.930** | **0.129** | **2.009** | **0.155** | **2.230** | **0.184** | **0.845** | **0.111** | **1.504** | **0.145** |
We can see that ViGAS achieves pretty good performance, and it outperforms both INRAS and NAF, serving as a strong baseline.
Our approach is better than ViGAS in terms of both MAG (1.504 compared to ViGAS's 1.830) and ENV (0.145 compared to ViGAS's 0.150). These results further underscore the effectiveness of our method in accurately capturing the underlying acoustic field.
**W2: Task definition.** We agree with the reviewer's point regarding the statement in our task definition. [1] can generalize to new scenes. In our revision, we will modify the task statement to address this concern.
[1] Majumder, Sagnik, et al. "Few-shot audio-visual learning of environment acoustics." NeurIPS. 2022.
---
Rebuttal Comment 1.1:
Comment: The rebuttal addresses my concerns. I will keep my score and recommend accepting the paper.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for recommending the acceptance of our paper. We sincerely appreciate your constructive comments. We will include ViGAS's results in our revised paper.
Beyond MLE: Convex Learning for Text Generation | Accept (poster) | Summary: Based on the ideas of convex optimization, the paper proposes a new loss function for the training of text generation models. The experimental results show that it has certain advantages over MLE, whether in autoregressive models or non-autoregressive models.
Strengths: 1. The paper has done a relatively detailed theoretical derivation;
2. The final obtained loss is simple and practical;
3. Experimental results show its superiority over MLE in both autoregressive and non-autoregressive models;
4. It has narrowed the gap between greedy search and beam search.
Weaknesses: 1. Some key experimental hyperparameters were not provided, such as the value of T, which I could not find in the main text or the appendix;
2. The experimental part only compared with MLE, without comparing other loss functions (such as other classification losses used to approximate accuracy);
3. The notation is a bit confusing (for example, k is used in both formula 11 and theorem 3, but the meanings are not the same);
4. Since the goal of the Convex Loss is one-hot rather than learning distribution, it seems that it can only be used to improve the greedy/beam search, and is not applicable to the decoding strategy of random sampling.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: The original goal of Convex Loss is to learn one-hot, which is more consistent with the greedy/beam search decoding strategy. However, this contradicts gradient-based optimizers, so the final loss balances the goal of one-hot and gradient, which is formula 12. My questions are:
1. Compared to the gradient of MLE, the gradient of formula 12, which is formula 13, seems to be smoother. This appears to make the model's output smoother rather than closer to one-hot. Doesn't this contradict the starting point of Convex Loss?
2. Is it possible to propose a quantitative metric to measure the impact of these two aspects? Based on this metric, we might be able to directly obtain an optimal solution, instead of introducing an adjustable hyperparameter like in formula 12. Because as long as an adjustable hyperparameter is introduced, the experimental results are likely to improve to a certain extent, which is a very commonplace feature and seems to waste the previous detailed theoretical analysis.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: I couldn't find any discussion about limitations in the main text and the appendix of the paper. Perhaps the authors should include it in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments. We provide discussions and explanations about your concerns as follows.
> Some key experimental hyperparameters were not provided, such as the value of T, which I could not find in the main text or the appendix; The notation is a bit confusing (for example, k is used in both formula 11 and theorem 3, but the meanings are not the same);
We apologize for the confusion caused by our presentation. T represents the length of the target sentence. In the revised version of the paper, we will ensure that all key experimental hyperparameters are adequately presented and explained. To avoid any ambiguity, we will change the notation "k" to "m" in Theorem 3 and Theorem 4.
We appreciate your attention to detail and valuable feedback, which helps us improve the clarity of our paper.
> Compared to the gradient of MLE, the gradient of formula 12, which is formula 13, seems to be smoother. This appears to make the model's output smoother rather than closer to one-hot. Doesn't this contradict the starting point of Convex Loss?
Thank you for your question regarding the gradient of our loss. We understand your concern and would like to clarify the misunderstanding.
The gradient of formula 12, $f(g(p(x)))$, is $f'(g(p(x))) \cdot g'(p(x))$ (line 242), where formula 13 represents only a part of the gradient. Compared to the gradient of MLE, $g'(p(x))$, our gradient has an additional term, $f'(g(p(x)))$ (formula 13), which can be understood as the weight for the MLE loss. With $f$ being a convex function, $f'$ is an increasing function, so $f'(g(p(x)))$ will be larger for samples with greater likelihood. This encourages the model to assign higher probability to the most probable outputs, which aligns with the starting point of Convex Loss.
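As a tiny numeric illustration of this weighting effect (with a hypothetical convex $f(u) = u^2$ and identity $g$, not the paper's actual choices):

```python
# Illustrative only: convex f(u) = u**2 with g(p) = p (hypothetical
# stand-ins, not the paper's actual loss). The extra gradient factor
# f'(g(p)) = 2p increases with p, so higher-likelihood samples get
# larger gradient weight than under plain MLE.
def f_prime(u):
    return 2.0 * u  # derivative of the convex f(u) = u**2

def gradient_weight(p):
    return f_prime(p)  # f'(g(p)) with g(p) = p

weights = [gradient_weight(p) for p in (0.1, 0.5, 0.9)]
```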
We hope this clarification resolves your concern. If you have any further questions, please feel free to bring them up for discussion.
> Since the goal of the Convex Loss is one-hot rather than learning distribution, it seems that it can only be used to improve the greedy/beam search, and is not applicable to the decoding strategy of random sampling.
Thank you for your comment regarding the applicability of our approach. You are correct that our method primarily focuses on closed-ended generation tasks, such as machine translation and text summarization, where convex loss can help greedy/beam decoding find high-probability outputs. While our approach ensures a sharper optimal distribution, it currently cannot accurately control the shape of the distribution, which limits its applicability to open-ended generation tasks.
Nevertheless, we believe that further research in this direction could lead to more precise control over the shape of the optimal distribution. This could potentially improve commonly used sampling methods (e.g., top-k sampling, nucleus sampling, temperature sampling) in open-ended generation by enabling more controllable and effective generation of desired results.
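For reference, the sampling strategies mentioned above can be sketched as follows (these are standard decoding techniques, not part of our method; the `sample` helper below is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(logits, temperature=1.0, top_k=None):
    # Temperature scaling sharpens (T < 1) or flattens (T > 1) the distribution.
    logits = np.asarray(logits, dtype=float) / temperature
    if top_k is not None:
        # Top-k sampling: keep only the k most probable tokens.
        cutoff = np.sort(logits)[-top_k]
        logits = np.where(logits >= cutoff, logits, -np.inf)
    # Softmax with the usual max-subtraction for numerical stability.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)
```

A method that could control the shape of the optimal distribution would let such strategies trade off quality and diversity more predictably.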
> Is it possible to propose a quantitative metric to measure the impact of these two aspects? Based on this metric, we might be able to directly obtain an optimal solution, instead of introducing an adjustable hyperparameter like in formula 12. Because as long as an adjustable hyperparameter is introduced, the experimental results are likely to improve to a certain extent, which is a very commonplace feature and seems to waste the previous detailed theoretical analysis.
Thank you for your very insightful question regarding the possibility of proposing a quantitative metric to measure the impact of the two aspects you mentioned, which are indeed the major principles behind our loss design. Your question highlights the subtle trade-off between guiding the model to learn the desirable one-hot distribution and ensuring that the loss is not difficult to minimize.
Finding a quantitative metric to balance these two aspects and obtain an optimal solution would indeed be a very valuable contribution to the field. However, due to the complexity and challenges associated with this task, we are unable to provide a solution within a short period of time. We consider this an important direction for future work and will explore this idea in our ongoing research. | Summary: The authors have proposed a novel set of loss functions for learning neural sequence models in both AR and NAR factorizations. The core idea is to sharpen the model distribution so that it comes closer to the one-hot target distributions which are typically used for closed-ended generation tasks such as NMT. They have theoretically shown how composing convex and concave functions with the model's probability results in the aforementioned sharper induced distribution.
They performed a set of experiments in both AR and NAR settings. The AR experiments show that the proposed approach narrows the gap between greedy and beam search for NMT and slightly improves scores on a summarization task. The NAR results show improvements in translation quality.
Strengths: * The proposed approach is very intuitive and makes sense from the optimization point of view, given the motivation of obtaining a sharper distribution. It is also easy to implement.
* The proposed framework of convex/concave composition allows the community to extend this work by examining other potential candidates for the loss function.
* Assuming that the NAR NMT baseline results are strong baselines, the proposed approach substantially improves the NAR prediction quality.
Weaknesses: # Experimental analysis
* The resulting BLEU scores for the AR setting are too close and not convincing. I think it is necessary to perform multiple random-seed training runs or use another way of estimating the translation performance variance to verify the improvement. In its current form, I would be more inclined to conclude that AR modeling performance is nearly the same in terms of translation quality.
# Related work
There is a recent work (I am not an author!) around modeling sharper distributions by reformulating the sequential random variables to be one-hot: https://arxiv.org/abs/2205.00704. I believe it is very similar in terms of the underlying motivation and observed improvements. Given that this was not mentioned in this work, I am not sure how well other related work is covered in this paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: # tuning of k in AR vs NAR
* You said you chose k = 1 for AR and tuned k for the NAR experiments, and you only showed the ablation over k for the NAR task. Why is that? I suspect there are some interesting findings about k in the AR setting. Could you share any insights? Also, I believe it is crucial to state that you tuned k for the NAR experiments, as this decreases the practicality of the method in real life.
* From the training details I see that you didn't use label smoothing for NAR modeling with the convex objective. Why is that? I think the answer "it works better empirically" is not good enough here, as this method is inherently related to making distributions sharper while smoothing does the opposite. So I tend to believe there is an interesting relationship there.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Authors did not discuss limitations in the main text of their submission.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments. We provide discussions and explanations about your concerns as follows.
> The resulting BLEU scores for the AR setting are too close and not convincing. I think it is necessary to perform multiple random-seed training runs or use another way of estimating the translation performance variance to verify the improvement. In its current form, I would be more inclined to conclude that AR modeling performance is nearly the same in terms of translation quality.
Thank you for your feedback regarding the evaluation on AR setting. Following your suggestion, we plan to perform multiple random seed training and report the results in the revised version of the paper. We would also like to emphasize that the current version of our paper has already demonstrated the positive effect of our method on autoregressive models:
1. We conducted AR experiments on both machine translation and text summarization tasks. While the performance is close in the machine translation with beam search setting, our method shows improvements in other settings.
2. Figure 2 demonstrates a consistent improvement of our method across different beam sizes. The gap between our method and the baseline narrows as the beam size increases, which complies with our theoretical analysis and suggests that the convex loss has a positive impact on translation quality.
3. In the attached response PDF, we provide the AR performance under different values of k. We observe consistent improvements when k is close to 1, especially for greedy search.
We hope this response addresses your concern and demonstrates the effectiveness of our proposed method. If you have any further questions or concerns, please feel free to bring them up for discussion.
> There is a recent work (I am not an author!) around modeling sharper distributions by reformulating the sequential random variables to be one-hot: https://arxiv.org/abs/2205.00704. I believe it is very similar in terms of the underlying motivation and observed improvements. Given that this was not mentioned in this work, I am not sure how well other related work is covered in this paper.
We appreciate your reference to the recent work (https://arxiv.org/abs/2205.00704) that shares a similar motivation with our paper in terms of modeling sharper distributions. We apologize for not mentioning this work in our paper and would like to express our gratitude for bringing it to our attention.
Upon careful analysis of the method used in this paper, we found that it indeed shares a similar motivation with our work. However, the proposed method in the mentioned paper differs significantly from ours. The method in the paper reformulates the distribution of the model at the word level, which may not directly facilitate the model in finding high probability sentences at the sentence level. In contrast, our method directly trains the model to focus on highly probable sentences.
Nonetheless, we believe that the mentioned paper is an interesting and relevant work. In the revised version of our paper, we will include a discussion of this related work and highlight the differences between our method and the one proposed in the mentioned paper. Thank you once again for your valuable feedback.
> You said you chose k = 1 for AR and tuned k for the NAR experiments, and you only showed the ablation over k for the NAR task. Why is that? I suspect there are some interesting findings about k in the AR setting. Could you share any insights? Also, I believe it is crucial to state that you tuned k for the NAR experiments, as this decreases the practicality of the method in real life.
Thank you for your question regarding the choice of k. The reason we did not show the ablation study over k for the AR setting was simply space constraints. In the attached response PDF, we supplement the results for different values of k in the AR setting. Our results show that k=1 performs best, while other choices of k, such as k=0.5 or 0.75, also bring improvements, especially in the greedy-search setting.
For the NAR experiments, we tuned k from the set {1, 2, 3, 5, 8} on the validation set. We also provide the test set results for different values of k in Figure 2. In the revised version of the paper, we will include the results of different k in the AR setting to ensure a clearer presentation.
> From the training details I see that you didn't use label smoothing for NAR modeling with the convex objective. Why is that? I think the answer "it works better empirically" is not good enough here, as this method is inherently related to making distributions sharper while smoothing does the opposite. So I tend to believe there is an interesting relationship there.
Thank you for your insightful question regarding the relationship between label smoothing and convex objective. We have provided the experimental results in our attached response PDF, showing that autoregressive models perform better with label smoothing, while non-autoregressive models work better without it. As you mentioned, this experimental phenomenon is indeed interesting. Our interpretation is as follows:
Label smoothing serves as a regularization technique, helping the model generalize better on test data. Although we aim for a sharp distribution in autoregressive models to facilitate approximate search algorithms in identifying the most likely output, the importance of regularization may outweigh the need for a sharp distribution, making label smoothing beneficial.
However, for non-autoregressive models, they are theoretically unable to fit the data distribution, let alone overfit the dataset. In this case, finding a sharp distribution for NAR models to fit becomes more important than preventing overfitting with label smoothing. As a result, NAR models prefer not to use label smoothing.
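As a point of reference, label smoothing flattens the one-hot target distribution, which is exactly the opposite of the sharpening encouraged by the convex loss. An illustrative sketch of this standard technique (not specific to our paper):

```python
import numpy as np

def smooth_targets(target_idx, vocab_size, eps=0.1):
    # Standard label smoothing: move eps of the probability mass from the
    # gold token to a uniform distribution over the vocabulary.
    targets = np.full(vocab_size, eps / vocab_size)
    targets[target_idx] += 1.0 - eps
    return targets

t = smooth_targets(2, vocab_size=5, eps=0.1)
# The target is flattened: 0.92 on the gold token, 0.02 on every other
# token, which regularizes AR models but works against the sharper
# distributions that NAR models need.
```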
---
Rebuttal Comment 1.1:
Title: thank you
Comment: Thanks for your effort in doing extra experiments and providing more details to address my concerns. My concerns are addressed now and I am going to increase my score!
To address this issue, the authors propose combining a concave function (e.g., log in MLE) with an increasing convex function. They illustrate that optimizing this composite function yields a distribution that assigns higher probability to outputs with greater likelihood under p_data.
The authors apply their loss function to train both autoregressive and non-autoregressive machine translation models. They demonstrate that the resulting greedy and beam outputs from these models surpass the performance of the greedy and beam outputs produced by MLE.
Update: I have read the author's response. I am satisfied with the response and since I had already given an accept to the paper, I will keep my ranking.
Strengths: The authors' approach to addressing the challenge of learning a peaky distribution is innovative. The novel convex-composition loss functions introduced in this paper are particularly interesting within the realm of sequence generation.
The paper is highly compelling and offers an intuitive reading experience. The authors' strategy of incorporating an increasing convex function to emphasize high-probability outputs under p_data is remarkable.
The theoretical concepts discussed in this paper hold immense potential impact, even if the direct application to machine translation does not yield substantial results.
Weaknesses: Equation (5) may not hold true for non-autoregressive models as these models employ latent variables that interconnect the output tokens, as mentioned in [1] [2].
The term "Convex loss functions" in section 3.2 is misleading since the negative logarithm used in MLE is, in fact, a convex loss function. The authors actually refer to "Loss functions with convex f."
There is an inaccuracy in line 200 where the authors state that the standard loss function in maximum likelihood estimation is the log-probability. This is incorrect. The loss function is actually the negative log-probability, which is convex and not concave.
Equation (12) can only utilize odd values for the power function, as for even values, the function decreases within the range of g(p_theta(x)).
[1] Gu, J., et al. "Non-autoregressive neural machine translation." International Conference on Learning Representations (ICLR). 2018.
[2] Gu, Jiatao, and Xiang Kong. "Fully Non-autoregressive Neural Machine Translation: Tricks of the Trade." Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. 2021.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: In line 149, the statement "If Lf has multiple optimal distributions, we simply arrange them in alphabetical order and take the first one to be pf" raises concerns. It is not feasible to arrange distributions in alphabetical order since the space of distributions is not countable, even if the sample space is countable.
Regarding step number 3 in equation (18) of the Appendix, it is unclear how the optimality of pg implies its validity. Further clarification is needed to establish the reasoning behind this step and its connection to the optimality of pg.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have discussed that convex functions of probability are hard to train. It will be good to have a similar discussion about the composition loss functions discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments. We provide discussions and explanations about your concerns as follows.
> Equation (5) may not hold true for non-autoregressive models as these models employ latent variables that interconnect the output tokens.
Thank you for your comment regarding Equation (5) and its applicability. We acknowledge that the equation may not hold true for latent models, as they employ latent variables that interconnect the output tokens. In section 2.2, we focus on introducing the background knowledge about text generation models, which is why we used the original form in Equation (5). We appreciate your feedback and will clarify this point in the revised version of the paper to avoid any confusion.
> The term "Convex loss functions" in section 3.2 is misleading since the negative logarithm used in MLE is, in fact, a convex loss function. The authors actually refer to "Loss functions with convex f". There is an inaccuracy in line 200 where the authors state that the standard loss function in maximum likelihood estimation is the log-probability. This is incorrect. The loss function is actually the negative log-probability, which is convex and not concave.
Thank you for pointing out this issue. In the revised version of the paper, we will avoid using the term "Convex loss functions" and instead refer to them as "Loss functions with convex f" to prevent any confusion. Additionally, we will correct the statement in line 200 to "The standard loss function in maximum likelihood estimation is the negative log-probability, where log-probability is a concave function."
> Equation (12) can only utilize odd values for the power function, as for even values, the function decreases within the range of g(p_\theta(x)).
Thank you for pointing out the issue with the power function in Equation (12). We acknowledge that there was a writing mistake in our paper, which has been fixed in Appendix D. As you correctly noted, the power function $f(x)=x^k$ is not suitable for our purpose due to its behavior within the range of $\log(p)$. For even values of k, the function decreases, and for odd values of k, it is concave.
The correct form should be $f(x)=-(-x)^k$, with $0 < k < 1$, which is increasing and convex when $x < 0$. The corrected form and its experiment results are presented in Appendix D.
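A quick numerical sanity check of the corrected form (illustrative only; finite differences stand in for the first and second derivatives):

```python
import numpy as np

def f(x, k=0.5):
    # Corrected power form: f(x) = -(-x)^k with 0 < k < 1, defined on x < 0,
    # which is the range of g(p) = log p.
    return -((-x) ** k)

xs = np.linspace(-5.0, -0.01, 500)
ys = f(xs)
first = np.diff(ys)      # positive everywhere => f is increasing
second = np.diff(first)  # positive everywhere => f is convex
```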
We apologize for the confusion caused by this mistake and appreciate your attention to detail. We hope this clarification resolves your concern.
> In line 149, the statement "If Lf has multiple optimal distributions, we simply arrange them in alphabetical order and take the first one to be pf" raises concerns. It is not feasible to arrange distributions in alphabetical order since the space of distributions is not countable, even if the sample space is countable.
We appreciate your attention to detail and agree that our original statement was not appropriate. We will update the statement as follows: "If Lf has multiple optimal distributions, we use pf to denote an arbitrary optimal distribution. This choice does not harm the generality of our analysis, as the subsequent discussion is applicable to all optimal distributions."
> Regarding step number 3 in equation (18) of the Appendix, it is unclear how the optimality of pg implies its validity. Further clarification is needed to establish the reasoning behind this step and its connection to the optimality of p_g.
Thank you for your valuable question. We apologize for any confusion caused by our presentation, as we skipped too many steps in the derivation of Equation (18).
We can prove Equation (18) by contradiction, similar to the proof of Theorem 2. If Equation (18) does not hold, then we can further reduce the loss, which contradicts the optimality of $p_g$. Specifically, let $p_g'(x_i)=p_g(x_i)-\alpha$ and $p_g'(x_j)=p_g(x_j)+\alpha$. The gradient of the loss with respect to $\alpha$ at $\alpha=0$ is $p_{data}(x_i) \cdot g'(p_{g}(x_i)) - p_{data}(x_j) \cdot g'(p_{g}(x_j)) < 0$, so we can further reduce the loss with a positive $\alpha$, proving Equation (18) by contradiction.
We hope this clarification resolves your concern. We will update the appendix in the revised version of the paper to include the missing steps and ensure a clearer presentation of the proof. | Summary: This paper proposed a new learning objective for language modeling, which is extended from Maximum likelihood estimation (MLE). More specifically, the author replaced the log function in log-likelihood with an arbitrary increasing function f. Then the author analyzed the properties of the loss under scenarios where f is convex and its relaxation. Besides, the author provides two practically feasible function forms of f for the learning objectives. Experiments with exponential function show empirical improvement of transformers on translation and summarization tasks.
Strengths: 1. The paper proposed a variant of MLE loss for text generation, which is novel.
2. The assumptions for theoretical analysis are reasonable and commonly appeared in real-world application scenarios.
3. The paper is clearly stated and easy to follow.
Weaknesses: 1. About Baselines: The method should have been evaluated on more baselines. For AR model, the author only compared the proposed method with the vanilla transformer. For NAR, only two baselines (CMLM, CTC) are compared. The method needs to be applied on more powerful baselines (e.g. [1]) to show its effectiveness and universality.
2. About the f function: The author proposed two practical forms for the f function. However, in the experiments, only the exponential form is discussed.
3. About the theoretical analysis: The analyses seem trivial, and provided little instruction on the practical design of the loss function, or to guarantee the effectiveness.
Reference:
[1]. Bridging the Gap between Training and Inference for Neural Machine Translation, ACL 2019
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. In equation 10, is x_i a sequence or a token? In my understanding, x_i represents a sequence. If x_i is a sequence, the difference between the proposed method and MLE is adding a multiplier to the gradient, as shown in equation 9?
2. What are the empirical benefits of the theoretical analyses in the paper for further applications?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The author didn't mention any societal impact of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments. We provide discussions and explanations about your concerns as follows.
> About Baselines: The method should have been evaluated on more baselines. For AR model, the author only compared the proposed method with the vanilla transformer. For NAR, only two baselines (CMLM, CTC) are compared. The method needs to be applied on more powerful baselines (e.g. [1]) to show its effectiveness and universality.
We appreciate your suggestion to evaluate our method on more baselines. In Table 2 of the attached PDF document, we present the outcomes of implementing the proposed convex loss on data generated via sequence-level distillation. Upon applying the convex loss on the distilled training data, we observed a significant improvement in the NAR model, while the effect seems minor on the AR model. Particularly, NAR models benefit from convex loss by acquiring a more concentrated distribution to address their inherent expressive limitations. The simplification of the data distribution, although beneficial, might not suffice for NAR models to achieve this objective. The introduction of our proposed convex loss serves to further facilitate this desired outcome.
Moreover, we would like to emphasize that our research is centered around the development of a theory-grounded algorithm to learn a concentrated distribution from a multi-modal data distribution. Therefore, we did not compare our methods with other training-augmented techniques, such as oracle-based schedule sampling in [1].
[1]. Bridging the Gap between Training and Inference for Neural Machine Translation, ACL 2019
> About the f function: The author proposed two practical forms for the f function. However, in the experiments, only the exponential form is discussed.
Thank you for your comment regarding the f function. We apologize for any confusion caused by the presentation. In our paper, we indeed explored both forms of the f function. However, due to space constraints, we included the results and discussion for the power form in the supplementary materials.
We found that the power form of the f function encountered some difficulties during training, leading to worse performance compared to the exponential form. In the revised version of the paper, we will make it clearer that the results for both forms of the f function are provided and ensure that the supplementary materials are more easily accessible for readers interested in the details of the power form.
> About the theoretical analysis: The analyses seem trivial, and provided little instruction on the practical design of the loss function, or to guarantee the effectiveness. What are the empirical benefits of the theoretical analyses?
Thank you for your feedback regarding the theoretical analysis in our paper. We would like to emphasize that our analysis is non-trivial and contributes novel insights to the field. Our work presents four theorems accompanied by detailed proofs, which were previously unknown in the field.
Our theoretical analysis provides valuable practical guidance for the design of loss functions. It demonstrates the effectiveness of convex functions in learning a one-hot distribution, while also highlighting the limitations of directly optimizing convex functions due to gradient-related challenges. This understanding informed the development of our composite loss function, a key step of our work.
Furthermore, our theoretical analysis guarantees the effectiveness of the composite loss function by showing that it yields a distribution that assigns higher probability to outputs with greater likelihood under p_data. This property is particularly desirable for closed-ended text generation tasks, where generating the most appropriate response is the primary goal.
We hope that this response clarifies the importance and contributions of our theoretical analysis. We appreciate your feedback and will continue refining our paper to make our contribution clear.
> Is x_i a sequence or a token? Is the difference between the proposed method and MLE adding a multiplier to the gradient, as shown in equation 9?
Thank you for your question regarding Equation 10. We would like to confirm that x_i indeed represents a sequence, not a token. Your understanding of the difference between the proposed method and MLE is correct. In our method, we add a multiplier to the gradient, which can be interpreted as a weight for the loss.
As discussed in lines 242-247 and shown in Equation 13, the gradient of the convex-composition function is $f'(g(p)) \cdot g'(p)$. The additional term $f'(g(p))$ serves as a weight on the loss. Given that $f$ is a convex function and $g$ is an increasing function, the weight $f'(g(p))$ is larger for more probable samples, thereby directing the model's focus towards generating outputs with high probabilities.
Rebuttal: We thank reviewers for their valuable feedback and insightful comments on our paper. In response to the concerns and recommendations, we have conducted additional experiments and provided supplementary results in the attached response PDF. In Table 1, we present the results of tuning the hyper-parameter k for autoregressive models. In Table 2, we show the results of models trained with and without knowledge distillation. In Table 3, we report the outcomes of combining our method with text VAEs. In Table 4, we provide the results of models trained with and without label smoothing. Additionally, in Figure 1, we illustrate the performance of autoregressive models trained with and without knowledge distillation across different beam sizes.
Pdf: /pdf/3cc30f6f7e8ef07176cff1d0cca679cb7630fd2c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper discusses the potential limitations of likelihood/KL-guided training for natural language generators from a measure theoretical perspective. It analyses the discrepancy between maximizing the likelihood (thus recall-friendly) and producing high-quality samples (thus precision-oriented) in autoregressive and non-autoregressive models, and proposes an alternative to MLE as the concrete solution. Experiments on enhancing autoregressive/non-autoregressive language models on machine translation validate the effectiveness of the proposed objective.
Strengths: 1. The theoretical analysis is insightful, solid, and well-presented. It makes a lot of sense and the corresponding experiment results comply with the analysis, especially the ones on non-autoregressive models.
2. The measure theoretical problem it studies has been a critical yet long-neglected one since the falling short of language GANs. This paper is a solid step towards the ultimate solution and I would really hope the publication of this paper can bring the community's focus back to this area.
Weaknesses: 1. It's less analyzed in this paper about the proposed objective on some potential (important and necessary) scenarios. For example, for text VAE/other types of latent-variable text generators, it's long been a dilemma to improve the generation quality while preventing the latent collapse problem.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. As stated in Weaknesses, is it possible to include additional experiments on diverse text generation with VAEs? You can try interpolation and I'm very curious if the proposed approach is eventually capable of mitigating the latent collapse of text VAEs.
2. I would appreciate it if you can explicitly compare against models under setups with/without sample-based distillation from teacher models. As far as I'm concerned the dynamics of distillation (especially in machine translation) would be somehow similar to the proposed approach, I'd like to see if it is possible to achieve the performance of a model w/ distillation by simply replacing the objective with the proposed one.
Would be more than happy to raise my scores if these two aspects are addressed in the revision period.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: I don't observe any potential societal limitations of this work. This work is more of an algorithmic/theoretical contribution to the community.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments. We provide discussions and explanations about your concerns as follows.
> Include additional experiments on diverse text generation with VAEs. You can try interpolation and I'm very curious if the proposed approach is eventually capable of mitigating the latent collapse of text VAEs
Following your advice, we study the effects of convex functions on VAE-text models by replacing the log-probability-based reconstruction loss in the ELBO with the convex loss. Formally, we train the model using the following loss:
$\mathbb{E}_{z\sim q(z|x)}\left[-f\left(\frac{1}{T}\log p(x|z)\right)\right] + \mathrm{KL}(q(z|x)\,\|\,p(z))$.
Here, we opted for the convex function $f$ to be $\exp(x)$, as outlined in our paper.
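For concreteness, the objective above can be sketched per example in a few lines (hypothetical function and argument names; the actual implementation operates on batched token log-probabilities):

```python
import math

def convex_composition_elbo(log_px_given_z, kl, T=1.0, f=math.exp):
    """Per-example loss: the reconstruction log-probability is passed through
    a convex function f (here exp, as in our experiments) before negation,
    and the KL term is added unchanged."""
    return -f(log_px_given_z / T) + kl
```

Setting `f` to the identity recovers the usual negative-ELBO reconstruction term, which is the baseline we compare against.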
We conducted the experiments in a conditional generation scenario, using a VAE-based non-autoregressive model [1,2] applied to a machine translation task. During generation, we randomly sampled the latent variable using 3 different seeds to obtain the texts. We assessed the generation quality with the BLEU score computed against the reference (reference-BLEU) and measured the diversity with the BLEU score of the samples computed against each other (pairwise-BLEU). The average value and standard deviation are reported in the attached response PDF.
During training, we observed that the KL divergence tends to vanish more readily when the convex functions are applied. We attribute this phenomenon to the smaller gradient norms associated with the convex-composition loss: the gradient of the KL divergence dominates the model update, leading to the vanishing KL divergence. We also noted that VAE-text models trained using the convex-composition loss exhibit higher generation quality while suffering from poor diversity, which is consistent with the mode-collapse property of the convex loss.
[1] Shu, Raphael, et al. "Latent-variable non-autoregressive neural machine translation with deterministic inference using a delta posterior." Proceedings of AAAI 2020.
[2] Gu, Jiatao, and Xiang Kong. "Fully Non-autoregressive Neural Machine Translation: Tricks of the Trade." Findings of ACL-IJCNLP 2021.
>Compare against models under setups with/without sample-based distillation from teacher models. As far as I'm concerned the dynamics of distillation (especially in machine translation) would be somehow similar to the proposed approach, I'd like to see if it is possible to achieve the performance of a model w/ distillation by simply replacing the objective with the proposed one.
Thank you for your insightful comments. We agree with you that the dynamics of distillation are similar to our approach. In sequence-level knowledge distillation, the student model is encouraged to imitate the output of the teacher model, which yields a less diverse training dataset and thereby a sharper predicted distribution. This characteristic is similar to the effect of the convex loss.
To compare the two methods, we used the autoregressive Transformer as the teacher and applied sequence-level knowledge distillation to train our models. The experiment results in the attached response PDF (Table 3, Figure 1) show that convex loss and knowledge distillation have similar effects on text generation models. Both methods lead to significant improvements in non-autoregressive models and bridge the performance gap between greedy and beam search of autoregressive models. It is worth noting that our method does not require training an additional teacher model and decoding the training set to achieve these improvements. Moreover, convex loss can be combined with knowledge distillation to further enhance the performance.
If you have any further thoughts, please feel free to bring them up, and we will be more than happy to engage in an in-depth discussion.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I appreciate the efforts from the authors. I'm satisfied with the second-round feedback from the authors and I'm ready to raise my score. | null | null | null | null | null | null |
Survival Instinct in Offline Reinforcement Learning | Accept (spotlight) | Summary: The authors make an interesting and important observation about offline Reinforcement Learning. They note that an agent can learn from offline data even when the reward signal is not the same one as that used to train the online agent. Moreover, it can be vastly different, or even with no reward signal at all, within certain types of task and with certain datasets. In particular, when a dataset has limited and biased data coverage, the aim of an offline agent to stay within the data distribution means that it has an equivalent of a survival instinct. This point is noted first empirically and then proven formally, as well as being shown in an extensive set of environments and settings.
The conditions on the training data necessary for an agent to learn a near-optimal policy are identified. Finally, a perspective on offline RL is given under which the reward signal can be essentially ignored, so long as an appropriately biased dataset is provided.
Strengths: The originality and quality of the paper are both very strong. The combination of formal mathematical proofs of the insensitivity to the rewards and the experiments which show this to be true are married well together. The paper as a whole is very well written, and the appendix is in particular very thorough and shows a good understanding of the need for transparency and clarity in experimental settings. The amount of formality within the bulk of the paper is also appropriate, giving enough to understand the results, but not to swamp them.
The findings themselves are original and are likely to have impact in the field of offline RL.
Weaknesses: I see few weaknesses overall, although there are a few typos throughout:
In most of the figures, the number of seeds is provided, though in figure 1 it is not. This should be added.
Line 120: "worst case a necessary"
Line 143: "In following"
Line 178: "s, a" may be clearer as (s,a).
Line 214: "This is contrast"
Line 227: "Positive data bias assumption"
Footnote on page 5 has an additional bracket at the end.
Line 232: "benefits"->"benefit"
Line 245: "states have a safe action"
Line 247: "is also mild"
Footnote on page 6: "we include a ablation"
Line 290: "with wrong can"
Line 295 needs to be rewritten.
Line 310: "with provably pessimism"
Line 315: "A implicit"
Line 318: "due the"
Line 332: "the end of episode"
Line 334: "there is no timeouts"
Line 915 in the appendix: "the followings are true"
Line 1025: "first we the"
Line 1089: "and let the goal state to be"
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: I think that a discussion of the circumstances under which an adversarial reward signal could be designed to attack an offline agent may be interesting. While it is clear that with appropriately biased data a random reward signal has little effect, there may be circumstances where a strong enough adversarial reward signal combined with an inappropriately biased dataset could prevent near-optimal performance.
Is it possible to make claims in the multi-agent offline setting similar to those in the single agent setting?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations of the work are given, but none of the potential societal impact of this work has been discussed. Given that Offline RL may well play a large role in ML systems going forward, this should be mentioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are happy that the reviewer overall feels positive about our paper. We want to thank the reviewer for reviewing our paper and pointing out typos in detail. We will fix these typos in the final version.
**Adversarial Reward**
Regarding the question on adversarial reward, we agree with the reviewer that when a dataset does not have positive bias, offline RL may suffer from a bad data reward. Indeed, in our experiments on the halfcheetah datasets, we did observe that the wrong data reward can make offline RL algorithms produce bad policies (see Fig. 3). From Definition 2, *theoretically*, an admissible (see Definition 4) offline RL algorithm learning from positively biased data can guard against any adversarial reward within the reward class $\tilde{\mathcal{R}}$. However, we currently don’t have a way to *empirically* verify whether a deep offline RL implementation is admissible, or to quantify the amount of positive bias of a data distribution, such as the size of the reward class $\tilde{\mathcal{R}}$. Our current suggestion is to actually run the offline RL algorithm with wrong data rewards and check whether the learned behavior differs from that given by learning with the true reward. One interesting future direction is to investigate whether/how offline RL algorithms can be robust to an adversarial reward that changes during training (Definition 2 only concerns a fixed reward).
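This check can be mocked up by relabeling the data rewards before running the offline RL algorithm. A minimal sketch, assuming transitions are stored as dicts with a `reward` key; `"negative"` is taken here to mean negating the stored reward, which is one plausible reading of the wrong-reward schemes:

```python
import random

def relabel_rewards(transitions, scheme, seed=0):
    """Return a copy of the dataset with rewards replaced by a wrong-reward
    scheme: 'zero', 'random' (uniform in [-1, 1]), or 'negative' (sign flip)."""
    rng = random.Random(seed)
    relabeled = []
    for t in transitions:
        t = dict(t)  # shallow copy so the original dataset is untouched
        if scheme == "zero":
            t["reward"] = 0.0
        elif scheme == "random":
            t["reward"] = rng.uniform(-1.0, 1.0)
        elif scheme == "negative":
            t["reward"] = -t["reward"]
        else:
            raise ValueError(f"unknown scheme: {scheme}")
        relabeled.append(t)
    return relabeled
```

One would then train on each relabeled copy and compare the learned behaviors against training with the original rewards.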
**Multi-agent Offline RL**
As the reviewer also pointed out, exploring whether multi-agent offline RL algorithms have a similar survival instinct is another interesting future direction. We will discuss these future research directions in the limitations in the final version of the manuscript.
**Societal Impact**
Lastly, we agree with the reviewer on the growing societal impact of offline RL going forward. We think the discovery here might shed light on a potential positive impact of offline RL. By survival instinct, offline RL may be able to train sequential-decision policies that have good behaviors without needing to collect negative data. This is different from the common belief that RL can learn to behave well only if it has seen both positive (success) and negative (failure) data. This ability to learn from one-sided data is important especially for applications where collecting negative data is unethical, e.g., learning not to produce hateful speech by first collecting hateful speech, learning not to harm patients in medical applications by first harming some, etc.
On the flip side, also by survival instinct, offline RL can be prone to existing societal (e.g., gender, racial) biases in data, and, moreover, such a bias cannot be easily corrected by relabeling data with a different reward. As a result, when using an offline RL algorithm, more strategic thinking on data collection might be needed. We encourage researchers and practitioners to collect datasets using methodology such as those proposed in [1] and provide details such as how data was collected and cleaned so that users can assess whether it is appropriate to train offline RL algorithms with these datasets.
[1] Gebru, Timnit, et al. “Datasheets for datasets.” Communications of the ACM 2021.
---
Rebuttal Comment 1.1:
Title: Reply to authors
Comment: I am happy with these comments and will keep my initial rating for this paper. | Summary: This work presents a new observation: offline reinforcement learning (RL) algorithms can develop efficient policies even when trained with incorrect reward labels. The authors attribute this resilience to the pessimistic nature of offline RL algorithms and the inherent biases in data collection processes. The findings reveal a ‘survival instinct’ in these systems that informs how we understand existing offline RL benchmarks and how we create future ones. The study therefore recommends a new strategy for offline RL that promotes learning of desired behavior through flawed rewards but intentionally biased data coverage.
Strengths: 1. A good warm-up experiment to help readers understand the motivation.
2. Sound theoretical analysis to show the property of ‘survival instinct’ in current offline RL algorithms.
3. Comprehensive experiments on current D4RL and meta-world benchmarks with different offline RL algorithms.
Weaknesses: 1. There exists a gap between the intuition (longer trajectories in the data have a smaller optimality gap) and the definition of Positive Data Bias. Moreover, Positive Data Bias is not given a quantitative value for each dataset, so the reader cannot clearly see the difference between the different datasets.
2. The authors claim that offline RL is intrinsically safe; however, relevant experiments to validate this claim are missing.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 3. Regarding the intuition about positive data bias, I am quite interested in experiments on the Antmaze dataset from D4RL, where the task is multi-goal and the positive data bias claimed by the authors does not exist.
4. A detailed analysis of the influence of the three types of “wrong” reward labels can make this work more complete and robust.
5. More experiments about safe RL are highly appreciated [1].
6. In line 247 about safe RL: "This condition can be easily satisfied by filtering out unsafe states in post processing." From the previous work [2], one of the biggest reasons why we prefer offline RL over BC is that offline RL can learn that those states are dangerous. What does the author think about this?
[1] Liu Z, Guo Z, Lin H, et al. Datasets and Benchmarks for Offline Safe Reinforcement Learning[J]. arXiv preprint arXiv:2306.09303, 2023.
[2] Kumar A, Hong J, Singh A, et al. When should we prefer offline reinforcement learning over behavioral cloning?[J]. arXiv preprint arXiv:2204.05618, 2022.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: see the above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Quantifying Positive Bias**
We would like to clarify that positive bias is a general notion under which approximate optimality w.r.t. the CMDP implies approximate optimality w.r.t. the true reward, while *"longer trajectories in the data have a smaller optimality gap"* refers to a special form of positive data bias called *length bias*. In Prop. 3 (Appendix C.4), we provide a few examples of positive data bias, including length bias.
We argue that the performance gap between running an offline RL algorithm with true reward and other data rewards can be used to measure positive data bias (like our experiments). An interesting future direction is to quantify the positive bias in a dataset without having to run offline RL algorithms. We will discuss this in the limitations.
**Inherent Safety**
Our results on episode lengths for the D4RL datasets in Fig. 1 (b) and Appendix B.4 are evidence that offline RL is inherently safe. For hopper and walker, the cost can be defined by whether the agent falls, and an episode stops when the agent incurs a non-zero cost, i.e., an agent is the safest if it never falls. In Fig. 7, we show that offline RL algorithms, e.g., ATAC and PSPI, can achieve long episode lengths even when the behavior policy is unsafe. We conducted new experiments on the offline safety gymnasium [1] below, as the reviewer suggested.
**Antmaze**
As discussed in Section 4 (line 260), we acknowledge not all datasets have positive bias. With that said, AntMaze datasets technically have a positive bias, albeit a weak one. Although AntMaze datasets are collected by multi-goal policies, the terminal flags are *re-labeled based on a single test-time goal*. In other words, trajectories reaching the test-time goal are marked as terminal (which mathematically has an infinite length) while others have a bounded length of at most 1001. We refer the reviewer to Appendices D.2 and D.3 where we formally discuss how this creates a positive data bias.
However, such a length bias is hard for a deep offline RL implementation to pick up, because failed trajectories, which can be very long, would have *discounted* returns similar to those of the successful ones. In fact, it has been shown that, for the AntMaze medium and large datasets, good performance can only be achieved with *an algorithm-specific transformed reward*: IQL requires one to be subtracted from the rewards, and CQL requires the reward to be shifted and scaled to [-5, 5] [2]. During our earlier experiments, we found the results on AntMaze inconclusive, as it is hard to separate optimization difficulty from the effect of different data rewards.
**Analysis on Wrong Rewards**
Empirically, we observe that, for ATAC and PSPI, the learned policies from zero and random rewards are similar, as the random reward is in expectation a constant. In more diverse datasets, such as the medium-replay datasets, we often find policies learned with the negative reward worse than those learned using zero and random reward.
On the theoretical side, Definition 2 allows a dataset to have positive bias w.r.t. one of the wrong rewards, but not the others. In Prop. 3 (Appendix C), we provide several factors (potential-based reward shaping, expert demonstrations, and length bias) that can lead to positive data bias.
**Extra Experiments on Offline Safety RL**
We conduct experiments on offline safety gymnasium [1] and include the results in the attached PDF. We make a few remarks which are important to interpreting the results.
1. Our notion of inherent safety is different from [1]. Ours (Corollary 2) focuses on offline RL’s ability to produce safe policies given a dataset that *only* contains safe states, while [1] focuses on learning the concept of safety from a dataset which contains *both* safe and unsafe behaviors. Our notion of inherent safety can be more favorable when it is inadmissible/unethical to collect unsafe data.
2. For this reason, the datasets in [1] contain unsafe states. We use a naïve filtering strategy, which removes all transitions with non-zero costs. This ensures our data only covers safe states, but it also means that we use less data than the comparators, algorithms in [1]. Better results may be possible with a more sophisticated filtering strategy.
3. Many algorithms from [1] use a cost target, which sets the maximum total cost allowed by safety. [1] uses three different cost targets, {20, 40, 80}, and reports the normalized total reward and cost for each algorithm averaged over the three targets. By contrast, standard offline RL algorithms do not use such a cost target. We report results of offline RL algorithms according to an “effective cost target” of 34.29, which is similar to how BC_All results are presented in [1].
We observe that *offline RL with a naïve data filtering strategy achieves comparable performance as the per-task best performing offline safe RL algorithm*. Our agents in general incur low cost except for in the circle tasks. We hope these new results can give the reviewer more confidence on the safety property of offline RL.
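The naïve filtering strategy described in point 2 above can be sketched in a few lines (hypothetical transition schema with a `cost` field; transitions without a recorded cost are treated as safe):

```python
def filter_safe_transitions(transitions):
    """Keep only transitions with zero cost, so that the filtered dataset
    covers safe states only (naive strategy; discards some data)."""
    return [t for t in transitions if t.get("cost", 0.0) == 0.0]
```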
**Filtering Unsafe States**
Filtering often naturally happens during data collection, e.g., a robot is likely to be stopped when it’s about to crash. In this case, we have a dataset with only safe states, and some of the trajectories are incomplete due to intervention. By Corollary 2, offline RL can learn a safe policy on such a dataset, *provided that a safe in-support policy exists*. This is because due to survival instinct, offline RL algorithms would 1) assign a low value to such an unsafe state and 2) propagate this low value to states and actions encountered earlier. BC, however, may follow an unsafe trajectory and enter an unsafe state. This is demonstrated in the D4RL episode length results in Appendix B.4.
[1] Liu et al. Datasets and Benchmarks for Offline Safe Reinforcement Learning. 2023
[2] Tarasov et al. CORL: Research-oriented deep offline reinforcement learning library. 2022
---
Rebuttal Comment 1.1:
Title: Official Reply from Reviewer k1TU
Comment: I appreciate the authors' clarifications, and most of my concerns have been addressed. At the same time, I am impressed by the experiments in [1] which show that offline RL methods with naïve data filtering achieve comparable performance to state-of-the-art offline safe RL algorithms. I believe this work and the d4rl Mujoco dataset will attract the attention of the offline RL community. The previous work [2] has mentioned the phenomenon of learning with incorrect reward signals, but the strength of this work lies in its exhaustive experimentation and analysis of this phenomenon. I will maintain my rating, and I suggest citing the work of Shin D et al. [2] in this paper.
[2] Shin D, Dragan A D, Brown D S. Benchmarks and algorithms for offline preference-based reward learning[J]. arXiv preprint arXiv:2301.01392, 2023.
---
Reply to Comment 1.1.1:
Comment: We also found this paper [2] very recently. We will cite it in the final version. | Summary: This paper reports a very interesting new phenomenon that would be interesting to the offline RL community. It demonstrates that even when trajectories have the wrong reward labels, offline RL can learn good policies. The paper's experiments attempt to dissect the reasons for why this surprising phenomenon emerges, and argues that special to offline RL algorithms, pessimism endows the agent with survival instinct to stay within the data support, so that safe policies can be learned within these constraints.
Strengths: The phenomenon that has been identified is surprising; the experiments done to dissect this phenomenon are clear and solid. Overall a solid paper.
Weaknesses: I have only one concern, and welcome further clarification by the authors. The authors claim that this robustness phenomenon arises due to the survival instinct of offline RL algorithms. But it seems from the paper that it only arises when the data collected for offline RL has long-timescale trajectories. Is this the case? For instance, it does not arise in the other circumstances investigated, like when there are multiple sources of data, or other circumstances that deviate from this central feature.
If this is the case, then the authors should state it clearly. This is still very much an interesting phenomenon, but slightly more limited than the first impression one gets from the abstract and introduction, namely that this is potentially quite a universal phenomenon in offline RL.
If long trajectories are not the only circumstance in which this phenomenon arises, the authors could make the paper even clearer by listing all the circumstances in which it arises, which would be very helpful for the reader.
Regardless, as long as the authors match their conclusions to their demonstrations, this paper demonstrates indeed a very interesting phenomenon arising in offline RL that is worthy of publication.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See questions present in the above sections.
And one more question, about the suggestion at the final part of the abstract (and conclusion):
- if the implicit bias for long trajectories in offline data is the sole feature responsible for this wrong-labels phenomenon, then is it still reasonable to suggest that "whereby an agent is “nudged” to learn a desirable behavior with imperfect reward but purposely biased data coverage"?
In other words, does the "survival instinct" as discussed in the paper really have the sensitivity to make such a (next step) hypothesis reasonable?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have a limitations section, and are fair. They also nicely clarify that of course, offline RL does not always learn from wrong rewards.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **When positive bias arises & clarification on "long timescale trajectories"**
We thank reviewer Sf8f for providing feedback on our manuscript. The robustness to reward is attributable to an interplay between the survival instinct of offline RL algorithms (due to their pessimistic nature) and positive data bias. There can be *multiple* factors that cause positive data bias. Length bias is one such factor, which requires the length of an in-support trajectory to positively correlate with its return (see lines 220-221). Note that length bias is not the same as the dataset having more long trajectories. Further, there can be other factors of positive data bias (Prop. 3 in Appendix C), e.g., potential-based reward shaping or near-optimal behavioral policies (see Appendix B.5 for experiments), which are not related to the problem horizon or data trajectory length.
**Positive bias is not a universal phenomenon but it can be common**
We want to highlight that we do *not* intend to mean all existing offline RL datasets have positive data bias. In fact, we explicitly highlighted this in Section 4 (line 258) and showed examples like halfcheetah datasets, where there is no positive data bias. Nonetheless, we also point out that positive data bias often arises (due to factors listed in Prop. 3) in common data collection practices, such as intervention of unsafe or bad rollouts, or using expert demonstrations.
**On nudging the agent to learn desirable behavior**
Finally, to answer the reviewer’s question about how to purposely increase positive bias in data, let us consider length bias as an example. During data collection, we can stop a data-collection episode early if the agent is not performing well. This creates a positive correlation between data trajectory length and performance, and thereby increases the length bias of the dataset. Recall that survival instinct gives offline RL agents the incentive to produce trajectories that stay within support in the long term. Therefore, offline RL algorithms, due to survival instinct, can learn better with such a dataset. We are happy to answer any follow-up questions.
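The early-stopping idea can be illustrated with a small sketch (hypothetical data layout: a rollout is a list of `(state, action, reward)` steps, and the per-step reward is used as a crude performance signal):

```python
def truncate_on_poor_performance(rollout, min_step_reward):
    """Keep the prefix of a rollout up to the first step whose reward falls
    below a threshold. Storing only well-performing prefixes makes stored
    trajectory length positively correlate with performance (length bias)."""
    kept = []
    for state, action, reward in rollout:
        if reward < min_step_reward:
            break
        kept.append((state, action, reward))
    return kept
```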
---
Rebuttal Comment 1.1:
Comment: I appreciate the clarifications. This is a good paper; I keep my score at 6. | Summary: In this paper, the authors show that offline RL algorithms have an implicit survival instinct that often allows them to learn good policies with incorrect rewards. The authors argue that this is due to the data being positively biased and the pessimism that constrains offline RL algorithms to the data. This argument is supported by theory saying that when these conditions are met, the learned policy is close to the optimal policy and meets certain safety guarantees. They show several ways in which data bias can be introduced that benefit offline RL algorithms, such as length bias. Lastly, this phenomenon is shown empirically across several offline RL algorithms and benchmarks.
Strengths: * Overall, I really enjoyed this paper. While pessimism is known to play an important role in offline RL, I think that this paper does a good job of illuminating how important pessimism is in these offline benchmarks. I think this will be an important paper for the offline RL community, especially when considering these benchmark tasks.
* The paper is very well written.
* The paper is very comprehensive with a strong Appendix section detailing experiment details and additional results.
* The paper has a lot of theory explaining the survival instinct phenomenon and showing how algorithms in the literature have this survival instinct.
Weaknesses: * I still do not fully understand the argument around length bias (see Questions), and I am not convinced that this is a main cause of the phenomenon shown in Figure 3. In particular, how do we know that specifically the length of the trajectories is the important property? I would assume that the medium and medium-expert datasets just have more support over actions that prevent the agent from falling down. The length of the trajectory just seems like a byproduct of these good actions. More discussion or evidence here would be appreciated.
* The main paper makes it unclear how data was generated for the grid world. While the appendix does have some more information about that, a little more should be explained here to make the example more understandable.
* The offline agent is trained with the “wrong reward”, but what is the wrong reward? It seems that in the appendix you consider three wrong rewards, but which one is used? Different rewards here could change the outcome drastically.
* (Line 219) The authors claim that a dataset generated by the optimal policy is infinitely positively biased regardless of the reward. This seems like a bold claim to make for all MDPs and all reward functions without a proof.
* What happens with expert D4RL datasets? It would be interesting to see what performance is on this, especially since the claim is that datasets from the optimal policy are infinitely positively biased regardless of the reward.
* Potential typo on Line 1415: The agent starts deterministically in the upper left it seems, not right.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: * Line 217 you say that any policy is infinitely positively biased for any rewards resulting from potential-based reward shaping. Isn’t positive bias a property of the offline dataset and not the reward? This didn’t make sense to me.
* Do you have any hypotheses for why some algorithms have different levels of survival instincts? Is it because of the amount of pessimism in the algorithms?
* I still have several questions about length bias:
* I understand that often in benchmarks longer trajectories correspond to higher returns, but it is not clear to me why this always helps. For instance, suppose we have a dataset with trajectories where half the time the robot fell and half the time it did not. If we label all rewards with -1, wouldn’t the agent be incentivized to fall as fast as possible to incur less total negative reward?
* Perhaps I am misunderstanding the assumptions in Proposition 5, but it seems like there is no assumption on how likely long trajectories are to appear in the dataset, only on the returns of long trajectories. I would think there has to be some assumption on the former.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Although I do not think this work necessarily needs to address these limitations, it would be beneficial to have some discussion about them in my opinion:
* While the paper does cover many relevant offline RL benchmarks, it does not explore all types of datasets. In particular, it does not appear that the authors explored cases where there are only sub trajectories of good behavior that need to be stitched together.
* The assumption of positively biased data in Theorem 1 seems strong. Since the definition of positively biased data is about the value of the delta-optimal policy set of the CMDP, there seems to be no intuition for when Theorem 1 will hold when looking at properties of the offline dataset only. In other words, it does not appear that there are any methods that practitioners could use to estimate the role of survival instincts for any arbitrary dataset.
* While there are theoretical results for offline model-based RL works, the paper’s experiments focus on model-free offline RL algorithms only.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Cause of Phenomenon in Fig 3**
We would like to first clarify that length bias means the length of an in-support trajectory *positively correlates* with its return (line 220), not that the dataset has many long trajectories. In fact, a dataset would have no length bias if it has lots of long bad trajectories, e.g., halfcheetah datasets.
We argue that length bias is one of the main causes of Fig. 3’s phenomenon. First, we visualize the presence of length bias in most datasets by showing the positive correlation in Fig. 4. Second, when trained on datasets without length bias, e.g., medium-replay and halfcheetah datasets, we observe that offline RL algorithms are more susceptible to wrong rewards.
In fact, the medium-replay datasets and halfcheetah datasets are more diverse and cover more full-length trajectories than some other datasets with length bias. For example, hopper-medium-replay has 44 full-length trajectories and yet hopper-medium has only 1. This suggests that *"have more support over actions ... from falling down"* is not the main reason for the robustness to wrong rewards.
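As an aside for readers, the correlation that the rebuttal points to in Fig. 4 is straightforward to check on any dataset. A minimal sketch (with synthetic stand-in data, not D4RL; the "hopper-medium-like" and "halfcheetah-like" generators below are illustrative assumptions):

```python
import numpy as np

def length_bias_correlation(trajectories):
    """Pearson correlation between trajectory length and return.
    `trajectories` is a list of per-step reward arrays; a strongly
    positive value suggests the dataset exhibits length bias."""
    lengths = np.array([len(r) for r in trajectories], dtype=float)
    returns = np.array([np.sum(r) for r in trajectories])
    return float(np.corrcoef(lengths, returns)[0, 1])

rng = np.random.default_rng(0)

# hopper-medium-like: roughly constant positive reward until the agent falls,
# so return grows with episode length
biased = [rng.uniform(0.9, 1.1, size=n) for n in rng.integers(50, 1000, 200)]

# no-length-bias-like: length varies but rewards are mean-zero, so length
# carries little information about return
unbiased = [rng.uniform(-1.0, 1.0, size=n) for n in rng.integers(50, 1000, 200)]
```

On the biased data the correlation is close to 1; on the unbiased data it is near 0, matching the definition of length bias in line 220.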
**Grid World Example**
PEVI produces the same policy and has the same performance under all three wrong rewards (zero, random, and negative) in the grid world example. We will clarify this and add more details on the data generation process in the revision.
**Positive Bias from Optimal Behavioral Policy**
In Appendix C, we provide a proof (Prop. 3 and Appendix C.4.2) of positive bias due to optimal, as well as deterministic near-optimal, behavioral policies. Intuitively, if data is from an optimal policy, all the actions in the support are optimal and therefore by survival instinct the learned policy will be optimal. The point below also empirically validates our statement.
**D4RL Expert Datasets**
In Appendix B.6, we report the results on the expert datasets of hopper, walker, and halfcheetah from D4RL. In Fig. 9, we show that in most scenarios (13 out of 15), offline RL algorithms can consistently achieve expert-level performance using all three wrong rewards, which verifies our theoretical statement.
**Positive Bias from Potential-based Reward Shaping**
We first want to note that there is a typo in line 217: any *policy* $\to$ any *data distribution*.
To answer the reviewer’s question, positive data bias is a property of a data distribution $\mu$ *with respect to a reward class* $\tilde{\mathcal{R}}$, per Definition 2. In line 217, we intended to convey that, if we consider a reward class $\tilde{\mathcal{R}}$ consisting of potential-based shaping rewards, then any data distribution is $\infty$-positively biased with respect to $\tilde{\mathcal{R}}$. This follows from the fact that the ordering of policies under a potential-based shaped reward is the same as the ordering under the original reward. We will revise the wording in line 271 to clarify this.
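For completeness, the standard telescoping argument behind this ordering claim (following the classic potential-based shaping result; the notation below is generic, not the paper's) is:

```latex
% Potential-based shaping with potential function \Phi:
\tilde{r}(s, a, s') = r(s, a, s') + \gamma\,\Phi(s') - \Phi(s)

% Along any trajectory (s_0, a_0, s_1, a_1, \dots), the shaping terms
% telescope (assuming \Phi bounded and \gamma < 1):
\sum_{t \ge 0} \gamma^{t}\,\tilde{r}(s_t, a_t, s_{t+1})
  = \sum_{t \ge 0} \gamma^{t}\, r(s_t, a_t, s_{t+1}) \;-\; \Phi(s_0)

% Hence \tilde{V}^{\pi}(s_0) = V^{\pi}(s_0) - \Phi(s_0) for every policy \pi:
% all values shift by the same state-dependent constant, so the ordering of
% policies is preserved.
```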
**Levels of Survival Instinct**
There are a few causes. First, different offline RL algorithms are designed based on different notions of pessimism, e.g. behavior regularization, pessimistic values, etc. These different notions may or may not be translatable to the R-admissibility condition (Definition 4 in Appendix C) that controls the survival instinct. Second, offline RL algorithms often have hyperparameters that adjust the level of pessimism, which, as the reviewer suggested, can affect the level of survival instinct. In Appendix E, we theoretically show how to set hyperparameters of several algorithms to be sufficiently pessimistic. In the experiments, we conducted a limited hyperparameter search, which might not always have yielded the same level of survival instinct as one another. Finally, as noted in the limitations, practical implementations may have slightly different properties than the theoretical algorithms due to numerical issues and implementation details.
**Dataset with -1 Reward**
The behavior of an offline RL agent depends on how "falling" is interpreted. Consider the robot example in the review. Suppose that each trajectory ends immediately when the robot falls. If falling is interpreted as "entering an absorbing state of value 0", the offline RL agent *would* fall, as the reviewer suggested.
But since we stop the trajectory upon falling, we find it more appropriate to consider falling as “entering a state that is unknown” (Footnote 4 & Appendix B.5). In this case, as the agent never sees what happens after falling, an offline RL algorithm with sufficient pessimism would treat these unknown states as having lower value than collecting -1 rewards, and hence avoid falling.
We note that using -1 reward is actually similar to our experiments with the *negative* reward for hopper and walker. Fig. 7 shows offline RL algorithms are able to learn policies that avoid falling with the negative reward. We also observed that offline RL algorithms can learn good policies with -1 reward in our earlier preliminary experiments.
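To make the pessimism argument above concrete, here is a toy value-iteration sketch (not the paper's setup; the pessimistic value of -100 assigned to the unseen post-fall state is an arbitrary illustrative assumption):

```python
def evaluate(v_unknown, gamma=0.9, n_iters=500):
    """Toy value iteration: an 'alive' state with two actions, both reward -1.
    'survive' stays alive; 'fall' enters an out-of-support state whose value
    is whatever the algorithm assumes for unseen states (v_unknown)."""
    v_alive = 0.0
    for _ in range(n_iters):
        q_survive = -1.0 + gamma * v_alive    # pay -1, stay alive
        q_fall = -1.0 + gamma * v_unknown     # pay -1, then the unknown state
        v_alive = max(q_survive, q_fall)
    # recompute the Q-values at the converged value of the alive state
    return -1.0 + gamma * v_alive, -1.0 + gamma * v_unknown

# pessimism: unseen post-fall state assumed very bad -> surviving wins
q_survive, q_fall = evaluate(v_unknown=-100.0)
assert q_survive > q_fall

# "fall = absorbing state of value 0" -> falling fast looks better,
# exactly as the reviewer's example suggests
q_survive, q_fall = evaluate(v_unknown=0.0)
assert q_fall > q_survive
```

The same -1-per-step reward thus yields opposite behavior depending only on the assumed value of out-of-support states, which is the distinction the footnote draws.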
**Assumption on Prop. 5**
The reason why we do not explicitly make an assumption on the likelihood of long trajectories in Prop. 5 is that it is covered already by the third point in Assumption 3. Notice that time information is part of the state in a finite horizon problem, so Assumption 3 implies that there is a non-zero probability of having full-horizon data trajectories (otherwise, the concentrability coefficient would be $\infty$). When Assumption 3 holds, statistical errors due to finite dataset size are included in the $\iota$ term in Theorem 1 (line 132). We will clarify this in the revision.
**Limitations**
We agree with the reviewer on these limitations and will discuss them in the final version of the manuscript. In particular, we find it to be interesting future work to quantify the amount of positive bias of a data distribution, including the size of reward class $\tilde{\mathcal{R}}$ it has positive bias with respect to, without having to actually run offline RL algorithms.
---
Rebuttal Comment 1.1:
Comment: Thank you for the thorough rebuttal! I appreciate the clarifications on my questions.
I agree that there is an interesting correlation shown in Figure 4, however I am still not convinced length itself is a driving force of the survival instinct. I realize now that I did not express my thoughts correctly when I said "more support over good actions." Instead I meant to say concentration around good actions (and thus a lack of support over bad actions).
Concretely, in the example that the authors give, there is 1 trajectory in Hopper medium and 44 trajectories in medium replay. It is therefore clear that the medium policy _only_ makes good decisions when it is put in a state where it is about to fall. On the other hand, medium-replay has bad actions when it is about to fall, and as such, the offline RL policy will not be as penalized for choosing these actions.
Although I am still unconvinced about length bias, I still think the results are important for the offline RL community, and I will keep my score where it is.
---
Reply to Comment 1.1.1:
Title: Thank you for your quick response.
Comment: Thanks for getting back to us so quickly!
From your response, we found our explanation regarding Fig. 3 in the rebuttal might have created a confusion about how to interpret Figure 4 and the length statistics. Please allow us to further clarify that.
When we wrote
> *hopper-medium-replay has 44 full-length trajectories and yet hopper-medium has only 1*
we intended to mean that, *among all the trajectories*, only 44 and 1 of them have the full length of 1000 steps. We did not mean that "there is 1 trajectory in Hopper medium and 44 trajectories in medium replay". In D4RL, there are in total 2187 trajectories in the hopper-medium dataset and 2041 trajectories in the hopper-medium-replay dataset. We plotted each trajectory as a dot in Fig 4. We can see that most of them are incomplete trajectories; there are just a few dots (1 and 44, respectively, if we count) at the right edge of Fig 4 that have full length.
Therefore, the medium policy does not always make good decisions when it's about to fall, because it actually falls in all but one of the data trajectories (the average trajectory length of hopper-medium is 457.25 steps, less than half the full length).
The main difference between the two datasets is that: the behavior policy in hopper-medium always takes actions leading to high instantaneous reward until it falls. On the other hand, in hopper-medium-replay, some policies take actions that would continue to survive but have low instantaneous reward (see the dots on the lower right of the hopper-medium-replay subfigure of Fig 4).
Such a difference in policy behaviors is what creates the length bias, i.e., the positive correlation between trajectory return and length, in hopper-medium. On the other hand, hopper-medium-replay does not have this property (due to the existence of surviving low-return long trajectories).
Finally, the survival instinct in offline RL makes the agent favor long trajectories that can stay in the data before optimizing for the reward. Since all long trajectories have high return when the data has length bias (like hopper-medium), we see the robustness to wrong rewards in Fig 3.
We are happy to answer any further questions. | Rebuttal 1:
Rebuttal: We thank all reviewers for providing feedback to our manuscript. It is encouraging to see that reviewers generally find our empirical findings and theoretical analysis relevant to the offline RL community. We are excited to see that the reviewers propose a few interesting future research directions, and we are happy to provide a discussion on these future directions in the revision. We also would like to thank reviewers for pointing out typos in our manuscript, which will be fixed in the final version. We will address specific questions and concerns from each reviewer in the individual responses.
In the attached PDF, we include a table which has additional experimental results on offline safety gymnasium [1], as per suggestions by reviewer k1TU. We observe that *offline RL with a naïve data filtering strategy can achieve comparable performance to the per-task best performing state-of-the-art offline safe RL algorithm*. We hope our new results can give reviewers more confidence in the safety properties of offline RL.
[1] Liu et al. Datasets and Benchmarks for Offline Safe Reinforcement Learning. 2023.
Pdf: /pdf/c69ae948d64e299b60ded587ddff2898de14efe6.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
FD-Align: Feature Discrimination Alignment for Fine-tuning Pre-Trained Models in Few-Shot Learning | Accept (poster) | Summary: This work aims to adapt large pre-trained vision-language models to few-shot tasks. To this end, the authors propose to decouple the category-related and category-independent information to alleviate overfitting when adapting large models to few samples. They claim to maintain the visual encoder's ability to extract category-independent information during fine-tuning. They conduct experiments on various datasets, including CoOp-datasets and OOD datasets, to verify the effectiveness of the proposed method.
Strengths: 1.The paper is organized and written well and the paper looks well polished. The overall story is clear to me.
2.The authors conduct experiments on extensive datasets for comprehensive evaluation.
Weaknesses: 1.The motivation is not very convincing. First, as shown in Figure 1 and lines 36-42, I do not think the separability among domains could significantly affect the performance of downstream tasks, since we conduct classification within a single domain. Second, as discussed in lines 110-118, why should the model retain the ability to extract category-independent information? In fact, if the model can ignore the category-independent information and only extract the category-related information, it will obtain better performance and be very robust, even on different domains.
2.The proposed method is kind of naive and can not be considered as a systematic approach.
3.The experiments are not very convincing, although they are conducted on various datasets. As shown in Table 1, some existing works are not compared, like CoOp and PLOT. I do not understand why this work uses ViT-B as backbone, since many baselines use ResNet50. This work should conduct experiments with ResNet50 and compare with the reported results of the baselines, instead of using a new backbone and reproducing the baselines' results. Even with ViT-B, PLOT++ has achieved 70.6% average accuracy on 1-shot tasks, but this work only achieves 63.92%.
Technical Quality: 1 poor
Clarity: 3 good
Questions for Authors: Please answer the questions in weakness.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 1 poor
Presentation: 3 good
Contribution: 1 poor
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1 : Motivation is not very convincing.
A1: Sorry for the confusion. Our primary objective focuses on **leveraging the pre-trained CLIP model for downstream few-shot tasks**. Specifically, when we have a dataset with limited samples, our aim is to enable CLIP to quickly adapt to it, and we also hope that the updated model can **generalize to other similar datasets**.
The most straightforward approach is to fine-tune CLIP using downstream data. However, there exist two issues. First, since the downstream data is limited in the few-shot setting, directly fine-tuning CLIP with **insufficient data may lead to overfitting**, thereby diminishing the model's performance. Second, this direct fine-tuning might cause the **loss of the ability to distinguish between causal information and spurious information** in the original CLIP, which can negatively impact the updated model's out-of-domain (OOD) generalization.
To address these two issues, we propose a controlled fine-tuning procedure for CLIP. Specifically, by imposing constraints that sustain its capability to differentiate between causal and spurious features, the updated model is steered away from both overfitting the few-shot data and fitting the spurious features.
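As an illustration of what such a constraint could look like in code, here is a minimal NumPy sketch that penalizes divergence between the fine-tuned and frozen encoders' similarity distributions over a set of category-independent (spurious) prototypes. All names, the temperature value, and the KL formulation are our illustrative reading, not the paper's exact loss:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def spurious_alignment_loss(feat_finetuned, feat_frozen, prototypes, tau=0.07):
    """Mean KL divergence between the frozen and fine-tuned encoders'
    similarity distributions over category-independent prototypes."""
    def dist(f):
        f = f / np.linalg.norm(f, axis=-1, keepdims=True)
        p = prototypes / np.linalg.norm(prototypes, axis=-1, keepdims=True)
        return softmax(f @ p.T / tau)          # cosine similarity -> softmax
    p, q = dist(feat_frozen), dist(feat_finetuned)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)))
```

Minimizing a term like this alongside the classification loss would keep the updated encoder's response to category-independent cues close to the pre-trained encoder's, which is the spirit of the constraint described above.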
Q2: I do not think the separability among domains could significantly affect the performance of downstream tasks (Figure 1 and line 36-42).
A2: Figure 1 of the paper visualizes the features of the same category under different domains. To better demonstrate our motivation and the effect of disentanglement, we also visualize the features of different categories under different domains, as shown in Figure 2 in the attached PDF of the global response. We employ t-SNE dimensionality reduction to visualize distinct categories of image features within both cartoon and sketch representations. This enables us to estimate the distribution of each category, delineated by an elliptic region. CLIP exhibits the capability to discern nuanced variations across diverse domains and categories. Subsequently, we fine-tuned the model on miniImageNet's training set. **On sketch images, whose domain differs greatly from miniImageNet, the features extracted by the fine-tuned model become less distinguishable between the two categories, and the distinction between domains becomes less obvious.** In contrast, our method preserves the differentiation between categories with larger differences as well as between domains.
Q3: The proposed method is kind of naive and can not be considered as a systematic approach.
A3: Thanks for the comments. While our approach may indeed seem simple on the surface, we view this **simplicity as one of its core strengths, rather than a limitation**. The ease of implementation and broad applicability that arise from this simplicity are valuable attributes in our opinion. We want to emphasize that **simplicity does not necessarily compromise innovation or the efficacy of a method**. In our particular case, this approach has demonstrated robust results, as substantiated by the experiments detailed in our submission.
For additional context, we can draw a comparison to one of our key baselines, WiSE-FT. Despite employing what might be considered a straightforward weighted fusion strategy for subsequent classification, WiSE-FT has achieved remarkable success, even being recognized as a finalist for the prestigious CVPR2022 Best Paper award.
We appreciate your insight and are open to further discussion if you have any specific suggestions or concerns that could enhance our work.
Q4: Why not compared with some existing works.
A4: Existing approaches mainly fall into two categories: one uses additional adapters to adapt to the downstream task; the other learns prompts to improve model performance. In contrast, our approach focuses on **improving the fine-tuning process of the CLIP backbone**. The fine-tuned backbone can be directly applied to previous methods to improve both ID and OOD performance.
We acknowledge the omission of pertinent comparisons with CoOp, Plot++, and other pertinent works. Your feedback is invaluable, and we are committed to rectifying this oversight in the revised rendition of our related work section.
Furthermore, the results in **Table 2 and Table 3 in global response** prove the efficacy of our fine-tuned backbone. Experimental results demonstrate that our improved backbone can be seamlessly integrated into existing methods with noticeable performance gains.
Q5: Why use ViT-B/32 as backbone rather than ResNet.
A5: Unlike existing methods, we need to fine-tune the backbone, so we chose ViT-B/32, the open-source CLIP backbone with the smallest memory footprint. Nevertheless, our approach is applicable to different backbones. We will add the performance of our method on ResNet50 in the revised version.
Our previous submission suffered from inadequate hyperparameter tuning. In response, we meticulously re-tuned our hyperparameters, consequently yielding superior results. To provide a comprehensive perspective, we present the latest outcomes in **Table 1 of the global response**. Also note that PLOT++ uses a stronger backbone (ViT-B/16) than ours.
[1] Robust fine-tuning of zero-shot models
[2] Conditional Prompt Learning for Vision-Language Models
[3] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models
---
Rebuttal Comment 1.1:
Title: We would be grateful if you could take a look at the response
Comment: Dear Reviewer TEuf,
Thank you for reviewing our paper. Just a friendly reminder that **the author-reviewer discussion will close soon**, and we eagerly await your feedback. In response to your comments, we've detailed our motivation to clear up any confusion and explained how our method differs from existing ones. To further validate our approach, we've added relevant experiments to the global response, even with space constraints. Could you please take a look at these updates?
We're here to discuss any more questions or concerns you may have about our paper.
With warm regards,
Authors
---
Rebuttal 2:
Title: The responses do not solve my concerns.
Comment: First, my concerns about motivation are not fully answered. Why is distinguishing different domains helpful for distinguishing different classes? Why is the category-independent information useful for classification? In fact, if the model can ignore all category-independent information and extract the same features for the same class across different domains, it will perfectly classify different classes, even for OOD samples.
Second, the authors appear to have deliberately used the backbone ViT-B/32, which is not commonly used in existing works, to avoid direct comparison. The authors do not provide a convincing reason for not using ResNet50, which would allow a real comparison with most existing works. The authors also do not use ViT-B/16, which prevents comparison with PLOT++. Where are the results on ResNet50 and ViT-B/16?
---
Rebuttal Comment 2.1:
Title: A more detailed explanation of our motivation and results on other backbone.
Comment: Thank you for your reply to our feedback. Below we provide further clarification on the motivation and results with other backbones.
**Q1: A more detailed explanation of our motivation.**
Ideally, features of data from the same category should be similar across domains. CLIP demonstrates a strong ability to differentiate between various categories and domains, and the features it extracts already have this property. Direct fine-tuning makes the model fit the categories and domains of the fine-tuning data better, at the cost of weaker recognition of domains not contained in that data. As shown in Figure 1 of our paper, the fine-tuned CLIP is less able to discriminate unseen domains; as shown in the sketch domain of Figure 2 in the PDF of the global rebuttal, the fine-tuned CLIP is also less able to discriminate between different classes of data on unseen domains. This suggests that fine-tuning CLIP also alters the features of domains not contained in the dataset. Since the dataset contains no data of this type, the adjustment of these features is inaccurate, which weakens the fine-tuned CLIP's ability to recognize class-related features. To preserve CLIP's ability to recognize the same category, we should avoid fine-tuning the features of unseen domains. We therefore need to ensure that the fine-tuned CLIP can still distinguish data from different domains, i.e., preserve CLIP's ability to discriminate category-irrelevant information during fine-tuning.
**Q2: Results with other backbones.**
To ensure a meaningful comparison with PLOT++, we evaluated our approach on the ImageNet dataset with the ViT-B/16 backbone. As shown in the table below (the PLOT++ results are sourced from its GitHub repository), our approach achieves superior performance at some shot numbers and a superior average performance across all shot numbers. We will add results on other datasets in our revised paper.
| METHOD | 1 shot | 2 shot | 4 shot | 8 shot | 16 shot | Average |
|:--------:|:---------:|:---------:|:------:|:------:|:-------:|-----------|
| PLOT++ | 66.45 | 68.28 | 70.40 | 71.31 | 72.60 | 69.80 |
| FD-Align | **69.24** | **69.65** | 70.28 | 71.02 | 71.60 | **70.36** |
In addition, given that our paper introduces a simple and effective algorithm, presents comprehensive experiments, provides accessible code ensuring reproducibility, and raises no ethical concerns, **we believe it does not merit a reject status**, whose definition is
>3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. | Summary: This paper aims to enhance the performance of pre-trained CLIP for few-shot learning tasks while maintaining their generalizability and mitigating the risk of overfitting. To achieve this goal, the key contribution of this paper is the spurious information extractor that captures category-independent information from image features. By aligning the probability distributions of image features under the spurious weights, the fine-tuned model's ability to extract spurious features is preserved. Overall, this paper offers a robust CLIP fine-tuning method for improving the performance and generalization capabilities of pre-trained models in few-shot learning tasks.
Strengths: 1) Cross-domain and OOD few-shot learning are important research topics in this field. In this respect, the goal of the proposed method is backed by strong motivations.
2) Good performance. Fairly extensive experiments demonstrate that CLIP pretraining is promising for the few-shot and OOD few-shot learning tasks. Ablation studies indicate the innovations are helpful.
Weaknesses: 1) In my opinion, utilizing features from CLIP is kind of conflicting with few-shot settings. Since CLIP is pretrained on multi-class large datasets, there could be some information leakage on similar classes; can it still be considered a few-shot learning task in this situation? As can be seen from Table 3, performance on some datasets that are unfamiliar to pretrained CLIP (e.g., ChestX) is relatively poor.
2) More discussions/results regarding the cross-domain and dataset settings should be added. For instance, how about the performance on the same class under different domains (e.g., painted, sketch and real dog)? What kinds of pre-training source images (domain or object) are more helpful?
3) There is insufficient information about implementation details, such as the Isolation Forest and K-means algorithms, which makes it difficult to reproduce the work.
4) Apart from the training stability, it is not specified how long the proposed method takes for the entire training (i.e., pretraining and fine-tuning) and how efficient it is compared to existing studies.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) As mentioned in Weakness 1, the authors should give clearer explanations, or analyze the similarity between the pretraining dataset and the fine-tuning dataset.
2) For weakness 2, it is suggested to provide some cross-domain comparisons on the same class; several examples of the classification confidence would also be helpful.
3) Typos such as "wheile" -> "while" in line 260.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: potential limitations have been discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: Is using features from CLIP kind of conflicting to few-shot settings?
A1: The consideration of CLIP features within the setting of few-shot learning remains a valid approach. Despite CLIP's extensive pretraining on vast datasets, it is important to acknowledge the potential for significant **deviations between its acquired knowledge and the specific requirements of downstream tasks**. Notably, recent advancements have witnessed the emergence of numerous CLIP-based few-shot learning paradigms, exemplified by methodologies such as Tip-adapter[1], APE[2], VPT[3], and others. These endeavors underscore the ongoing relevance and applicability of CLIP-based techniques in the context of few-shot learning scenarios.
Q2: Why is performance on ChestX so poor?
A2: The setting in Table 3 involves fine-tuning CLIP on miniImageNet's training set and subsequently evaluating its performance on the ChestX dataset, which predominantly consists of medical images. The distinct dissimilarity in overall data distribution and categories between ChestX and miniImageNet contributes significantly to the observed disparity in performance. Given that CLIP's classification performance here is still low, the performance gap is attributable to the lack of pertinent information within CLIP's training corpus. Notably, medical images are **difficult to classify without a specialized background** due to the domain-specific challenges they pose. This observation aligns with analogous findings in related research [4].
Q3: Some cross-domain comparisons on the same class.
A3: In response to your inquiry, we performed a comprehensive evaluation of the cross-domain and dataset settings. Specifically, we constructed few-shot training sets from ImageNet, fine-tuned the model, and then evaluated it on ImageNet and its variant datasets. The 16-shot results are shown in the table below: our approach significantly improves cross-domain performance compared to the direct fine-tuning alternative, and directly applying our fine-tuned backbone to existing methods also yields significant gains.
| Method | ImageNet | ImageNet-A | ImageNet-R | ImageNet-S | ImageNet-V2 |
|:--------------------------:|:--------:|:---------:|:---------:|:---------:|:----------:|
| CLIP | 63.34 | 31.57 | 68.45 | 42.31 | 55.92 |
| FT | 64.91 | 30.05 | 68.7 | 42.24 | 57.63 |
| FT+**FD-Align** | 66.39 | 31.8 | 69.7 | 43.5 | 57.73 |
| Tip-adapter | 65.49 | - | - | 42.48 | 57.58 |
| Tip-adapter+**FD-Align** | 65.49 | - | - | 43.84 | 59.10 |
| Tip-adapter-F | 68.43 | - | - | 42.54 | 59.58 |
| Tip-adapter-F+**FD-Align** | 68.70 | - | - | 43.67 | 60.17 |
| APE | 66.55 | - | - | 43.28 | 58.31 |
| APE+**FD-Align** | 67.69 | - | - | 44.23 | 59.36 |
| APE-T | 68.74 | - | - | 43.23 | 59.58 |
| APE-T+**FD-Align** | 69.15 | - | - | 44.04 | 60.83 |
Q4: The implementation details about Isolation Forest and K-means algorithms.
A4: We have explained the details of our experiment in Section 4.1. Specifically, 60 data points were retained by Isolation Forest and subsequently clustered into 20 classes using K-means. In addition, we have provided the **code in the supplement**. We will also **open-source the code** to ensure the reproducibility of the method.
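For readers trying to reproduce this step, a minimal sketch using scikit-learn's Isolation Forest and K-means implementations could look as follows. The random features, their dimensionality, and the variable names are placeholders; the actual inputs are the embeddings described in Section 4.1:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# stand-in for the embeddings being filtered (one row per data point)
features = rng.normal(size=(200, 512))

# keep the 60 least anomalous points (higher score = more "normal")
iso = IsolationForest(random_state=0).fit(features)
scores = iso.score_samples(features)
kept = features[np.argsort(scores)[-60:]]

# cluster the retained points into 20 prototypes
kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(kept)
prototypes = kmeans.cluster_centers_  # shape (20, 512)
```

Note that `IsolationForest` does not take a target count directly, so this sketch ranks by `score_samples` and keeps the top 60; an equivalent effect can be had by setting `contamination` to the desired outlier fraction.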
Q5: The time cost of our method
A5: To address this issue, we compared the time required by our method with that of the direct fine-tuning method. The following table shows the results of 1-shot fine-tuning on ImageNet. The results show that our method does not introduce much extra time consumption. Our method can be trained once and then applied directly to existing methods. Moreover, our approach does not introduce any additional time overhead compared to CLIP during the inference phase.
| Method | Time |
|--------------|------------|
| Fine-tuning | 7min 9s |
| Ours | 8min 53s |
Q6: Some typos of this paper.
A6: Thanks for pointing out the problem; we will fix it in the revised paper.
[1] Tip-Adapter: Training-free CLIP-Adapter for Better Vision-Language Modeling
[2] Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement
[3] Visual Prompt Tuning
[4] Channel Importance Matters in Few-Shot Image Classification | Summary: This paper introduces a novel approach to tackle few-shot learning utilizing the powerful pretrained CLIP model. The primary objective is to construct multiple CLIP text prompts with different contexts and a term to regularize the learning process. This helps prevent overfitting to irrelevant correlations. Specifically, the method involves creating specific prompts for each class, combining category-independent factors such as "toy," "drawing," and "plastic," with the object category name. A loss term is designed to ensure that the extraction of spurious information remains consistent both before and after fine-tuning the CLIP model.
Strengths: 1. The notion of using the capabilities of the CLIP model to enumerate potential spurious factors, and subsequently constructing a regularization mechanism for few-shot learning, is an intriguing concept.
2. Some experimental results presented in this paper, particularly the findings depicted in Figure 4 and Figure 6, seem promising and partially validate the proposed method.
Weaknesses: 1. The basic assumption of this paper, that fine-tuning the CLIP model makes it tend to overfit to spurious information, needs further validation. For example, GradCAM or another visualization method could be used to showcase such a phenomenon.
2. The notation is somewhat confusing, and careful proof-reading is needed as there are many typos in this paper, for example:
(1) In line 134, the definition of C should be given.
(2) The index i is used twice in the equation after line 142, which causes confusion.
(3) W_spu was mistakenly written as W_sup in the same equation.
(4) The prompts in Figure 3 give a good example of the category-independent information mentioned in line 142. It would be better to refer to Figure 3 in line 142 as a concrete example.
(5) It would be better to choose different notation to denote the object class and the context class (e.g., toy, plastic, ...) in the equations.
3. What is the Fine-tuning CLIP baseline in Table 2? Is it using the first loss term only? If so, the proposed method leads to only marginal improvement over this simple baseline, and the benefit of the proposed strategy might not be significant.
4. The improvement from using IF and K-means is also marginal; in most cases it is less than 0.5%.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. What is Fine-tuning CLIP baseline in Table 2? Is it using the first loss term only?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes, the limitation has been properly addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: The validation of the basic assumption.
A1: It is important to point out that our basic assumption is that **in few-shot learning, because the sample size is insufficient to fine-tune the model, the model is prone to overfitting to the causal and spurious information it currently sees**. That is, the model may **incorrectly treat spurious information in the training samples as causal information for the current category**, thus losing the ability to distinguish between different kinds of spurious information. Our hypothesis is demonstrated quite intuitively in Figure 1 of the paper, which shows the t-SNE visualization of features extracted by different models for the same category over different domains. Compared to CLIP, the fine-tuned model is less capable of recognizing variance in data from different domains.
Q2: The notation is somewhat confusing and careful proof-reading is needed as there are many typos in this paper.
A2: Thanks for your suggestions; we will update them in a later version.
Q3: What is Fine-tuning CLIP baseline in Table 2? Is it using the first loss term only?
A3: Yes, it is using the first term only. Previous submissions suffered from inadequate hyperparameter tuning. In response, we meticulously re-tuned our hyperparameters to bolster performance, consequently yielding superior results. To provide a comprehensive perspective, we present the latest results in the subsequent table.
| METHOD | 1 shot | 2 shot | 4 shot | 8 shot | 16 shot |
|:---------------:|:---------:|:---------:|:---------:|:---------:|:---------:|
| CLIP (zero-shot) | 60.33 | 60.33 | 60.33 | 60.33 | 60.33 |
| LP-CLIP | 22.17 | 31.90 | 41.20 | 49.52 | 56.13 |
| WiSE-FT | 63.36 | 64.27 | 66.16 | 68.13 | 69.55 |
| Tip-Adapter | 65.52 | 67.40 | 69.56 | 71.58 | 72.72 |
| FT | 65.24 | 67.42 | 69.85 | 71.73 | 73.00 |
| **FT+FD-Align** | **65.65** | **68.26** | **71.25** | **73.63** | **75.59** |
Q4: The improvement by using IF and K-means is marginal.
A4: The utilization of IF aims to eliminate inappropriate prompts from the prompt group and mitigate the influence of redundant prompts. This strategy proves especially beneficial when the quality of the prompt group is suboptimal. Given that we employ the prompt groups of CLIP for ImageNet, which contain few inappropriate prompts, the performance enhancement is comparatively modest; nevertheless, a gain is still attained. Furthermore, this procedure takes place during the training initialization phase and therefore has no impact on training speed. Conversely, reducing the number of spurious factors slightly accelerates training.
---
Rebuttal Comment 1.1:
Title: We would be grateful if you could take a look at the response
Comment: Dear Reviewer vxTm,
Thank you for reviewing our paper. Just a friendly reminder that **the author-reviewer discussion will close soon**, and we eagerly await your feedback. In response to your comments, we've made updates including a clearer t-SNE visualization to illustrate our assumption, an explanation of our choice to use Isolation Forest and Kmeans over manual removal, and adjustments to the hyperparameters to enhance performance. Our method now shows marked improvement over direct fine-tuning. Could you please take a look at these updates?
We're here to discuss any more questions or concerns you may have about our paper.
With warm regards,
Authors | Summary: This paper studies the problem of fine-tuning a pre-trained CLIP to downstream classification tasks with few-shot samples. The authors propose to fine-tune the category-dependent feature while retain the category-independent feature in order to improve the robustness of the fine-tuned model. The proposed method is compared with several baselines on several benchmarks.
Strengths: The motivation is clear. The paper is clearly written.
Weaknesses: - I'm not fully convinced by several key statements/arguments in the paper:
- L114-118, why retaining the spurious features can help model robustness? I think the less spurious feature it learns, the better it transfers to different domain? I also do not understand "This allows the model to combine different spurious information based on the category information, thereby maintaining generalization when the object appears in a new context, style, or domain."
- L131-132, "the object represents the causal information and 'a photo of' represents the spurious information". I do not understand why "a photo of" can contain spurious information. As the authors explain, spurious information includes domain, style, or background information. However, "a photo of" contains none of these: it does not discriminate between different domains, backgrounds, or styles. Then the spurious feature extractor is not actually extracting any spurious information.
- Only fully fine-tuning and WiSE-FT are compared. What about other parameter-efficient fine-tuning methods such as LoRA [1] and VPT [2]? Also, the gap between fully fine-tuning and FD-Align does not seem significant.
Technical Quality: 1 poor
Clarity: 2 fair
Questions for Authors: See Weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 1 poor
Presentation: 2 fair
Contribution: 1 poor
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: Why retaining spurious features can help model robustness?
A1: Sorry for the confusion. In fact, our objective is not to preserve the spurious features of an image, but to **preserve CLIP's ability to distinguish spurious features**. As illustrated in Figure 1a, CLIP demonstrates the capability to differentiate task-irrelevant information (deemed spurious information within the classification task). Regrettably, direct fine-tuning destroys this capability, leaving the model unable to differentiate images affected by distributional bias.
In few-shot tasks, the classifier encounters a mixture of causal and spurious information within the features during fine-tuning. Due to insufficient data, the fine-tuning procedure can readily overfit, causing the classifier to incorporate spurious features into its decision-making. This decreases the model's ability to recognize spurious information and thus causes errors in the testing phase. However, we impose constraints on the training process to preserve the model's ability to distinguish spurious features. The model then learns mainly task-relevant information during fine-tuning and refrains from attributing spurious features to the prevailing category. Consequently, this safeguard ensures that the model avoids overfitting to spurious information.
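Our constraint can be illustrated with a minimal NumPy sketch (not our actual implementation; the temperature `tau`, the feature dimensionality, and the number of spurious prompts below are illustrative assumptions). For an image feature, we compute its probability distribution over the spurious (context) prompt embeddings under both the frozen and the fine-tuned encoder and penalize their KL divergence, which is zero when fine-tuning leaves the spurious-feature discrimination unchanged:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def spurious_distribution(img_feat, spurious_prompts, tau=0.07):
    """Probability of an image feature over spurious (context) prompt embeddings."""
    img = img_feat / np.linalg.norm(img_feat)
    txt = spurious_prompts / np.linalg.norm(spurious_prompts, axis=1, keepdims=True)
    return softmax(txt @ img / tau)

def spurious_kl_penalty(feat_frozen, feat_tuned, spurious_prompts, eps=1e-12):
    """KL(frozen || tuned): added to the classification loss so that fine-tuning
    preserves the ability to distinguish spurious contexts."""
    p = spurious_distribution(feat_frozen, spurious_prompts)
    q = spurious_distribution(feat_tuned, spurious_prompts)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

rng = np.random.default_rng(0)
prompts = rng.normal(size=(8, 32))   # 8 spurious contexts, 32-d embeddings (toy sizes)
feat = rng.normal(size=32)
# Identical encoders incur no penalty; a drifted encoder is penalized.
assert spurious_kl_penalty(feat, feat, prompts) < 1e-9
assert spurious_kl_penalty(feat, feat + 0.3 * rng.normal(size=32), prompts) > 0.0
```

In the actual method the two distributions come from CLIP's image features before and after fine-tuning against the text embeddings of the context prompts; the sketch only conveys the shape of the constraint.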
Q2: How to understand "This allows the model to combine different spurious information based on the category information, thereby maintaining generalization when the object appears in a new context, style, or domain."?
A2: To facilitate comprehension, we use the object recognition task as an illustrative example. In this task, the object represents the task-relevant information, while the scene embodies the task-irrelevant information, and the two **coexist simultaneously**. Owing to the limited volume of training samples inherent in few-shot learning, direct fine-tuning often leads to pronounced overfitting to the scene information, thereby compromising the model's capacity to differentiate between spurious and causal information. Consequently, when the current category appears in an unseen scene, the model's ability to discern objects in that scene may falter, yielding classification errors. Notably, our proposed methodology retains the model's discriminative acumen between causal and spurious information. When objects appear in previously unseen scenes, our model adeptly discriminates between object-specific and scene-specific information, enhancing its proficiency on scenes absent from the training dataset. This, in turn, augments the overall generalization.
Q3: Why does "a photo of" contain spurious information?
A3: "a photo of" can be regared as a **style information**. Compared to sketch and painting, photo has a distinctly different style. Therefore, it is also a kind of spurious information.
Q4: Why not compare with parameter-efficient fine-tuning methods?
A4: Our goal is to fine-tune a better-performing, more generalizable backbone, so that it can replace the backbone of existing methods and improve their performance. In the following table, we provide comparative results on ImageNet when our backbone replaces the one used by existing methods.
|METHOD|1shot|2shot|4shot|8shot|16shot|
|-|-|-|-|-|-|
|Tip-Adapter[3]|64.11|64.36|64.63|65.17|65.49|
|Tip-Adapter+**FD-Align**|64.51|65.33|65.76|66.79|67.28|
|Tip-Adapter-F[3]|64.64|65.18|65.78|67.21|68.43|
|Tip-Adapter-F+**FD-Align**|64.86|65.61|66.11|67.58|68.70|
|APE[4]|65.36|65.69|66.00|66.55|66.55|
|APE+**FD-Align**|66.71|67.29|67.40|67.76|67.69|
|APE-T[4]|65.89|66.18|66.82|67.99|68.74|
|APE-T+**FD-Align**|66.84|67.37|67.81|68.73|69.15|
Q5: The gap between fully fine-tuning and FD-Align seems not really significant.
A5: Thanks for bringing this to our attention. Upon careful examination, we determined that the minor gap between our method and full fine-tuning was due to inadequate hyperparameter tuning. After re-tuning the hyperparameters of our method, a significant enhancement in performance can be observed. The updated results are shown in the table below.
| METHOD | 1 shot | 2 shot | 4 shot | 8 shot | 16 shot |
|:---------------:|:---------:|:---------:|:---------:|:---------:|:---------:|
| CLIP (zero-shot) | 60.33 | 60.33 | 60.33 | 60.33 | 60.33 |
| LP-CLIP | 22.17 | 31.90 | 41.20 | 49.52 | 56.13 |
| WiSE-FT | 63.36 | 64.27 | 66.16 | 68.13 | 69.55 |
| Tip-Adapter | 65.52 | 67.40 | 69.56 | 71.58 | 72.72 |
| FT | 65.24 | 67.42 | 69.85 | 71.73 | 73.00 |
| **FT+FD-Align** | **65.65** | **68.26** | **71.25** | **73.63** | **75.59** |
---
Rebuttal Comment 1.1:
Title: We would be grateful if you could take a look at the response
Comment: Dear Reviewer ePYg,
Thank you for reviewing our paper. Just a friendly reminder that **the author-reviewer discussion will close soon**, and we eagerly await your feedback. In response to some misunderstandings about the spurious feature, we've clarified it, hoping to address your concerns. We've also detailed how our method differs from the parameter-efficient fine-tuning method and included an experiment to prove our approach's effectiveness. Could you please take a look at these updates?
We're here to discuss any more questions or concerns you may have about our paper.
With warm regards,
Authors
---
Rebuttal Comment 1.2:
Title: The concern remains for Q1
Comment: I do not quite understand why "preserving the ability of CLIP to distinguish spurious features" can avoid overfitting on spurious features. Say, for a task of recognizing cows, we have a training set of 5 images, each with a cow on a meadow, and the test set is a cow in a desert. If the model can recognize spurious features (meadow/desert), then it might learn to classify based on the spurious features (meadow/desert) instead of the causal features (cows). In contrast, if the model cannot recognize the spurious features, it can only classify based on the causal features, which is more robust. Therefore, without "the ability to distinguish spurious features", the model is more robust. Please correct me if I'm wrong.
In the rebuttal, the authors say "...causing the classifier to incorporate spurious features into its decision-making framework. This phenomenon leads to a decrease in the model's ability to recognize spurious information...". This seems self-contradictory. Why learning to make decisions based on spurious features will harm the ability to recognize spurious features?
---
Reply to Comment 1.2.1:
Title: A more detailed explanation of our motivation with examples
Comment: Thank you for your feedback. We will explain our motivation in detail with an example.
In classification tasks, the most robust way is to have the model accurately recognize semantic information about the target object. It is well known that few-shot learning is prone to using a spurious feature as a causal feature: e.g., when the training set is cows in the meadow and camels in the desert and the test sample is a cow in the desert, the model is likely to classify the cow in the desert as a camel. To solve this problem, one way is to learn more robust semantic information about the target objects (features of cows and camels); the other is to enable the model to distinguish spurious information (meadow and desert) in the dataset without using it as a basis for classification. Fortunately, CLIP is able to robustly distinguish object features and does a good job of distinguishing different kinds of spurious information. Directly fine-tuning CLIP makes it learn the features of the dataset better (cows in the grass and camels in the desert), but at the cost that the features of the current category appearing in other scenarios are also fine-tuned (cows in the desert), thus destroying CLIP's ability to correctly discriminate data not contained in the dataset (cows in the desert). As shown in Figure 1 of our paper, direct fine-tuning of CLIP destroys its ability to distinguish between domains that do not exist in the dataset. As shown in Figure 2 of the global-rebuttal PDF, directly fine-tuning CLIP on ImageNet diminishes its ability to distinguish features of different classes of objects on sketches. This suggests that fine-tuning destroys CLIP's ability to distinguish spurious information not contained in the dataset, which weakens its ability to recognize targets under this spurious information.
Therefore, we need to ensure that during fine-tuning, CLIP only fine-tunes the features of the data contained in the dataset (cows in the grass, camels in the desert) and does not change the features of the spurious information that is not contained (cows in the desert). So we need to constrain the training process during fine-tuning so that it retains the ability to distinguish spurious information.
Furthermore, retaining the ability of the model to distinguish spurious information and not using spurious information as a basis for classification are not contradictory. If the model does not have the ability to distinguish spurious information, for example, a cow in the desert, it is likely to use the desert as causal information and thus classify it as a camel. When the model has the ability to distinguish between spurious information, it can distinguish between cows in the desert, where the desert is spurious information and the cows are causal information, and thus be able to classify it correctly.
In addition, given that our paper introduces a simple and effective algorithm, presents comprehensive experiments, provides accessible code ensuring reproducibility, and raises no ethical concerns, **we believe it does not merit a reject status**, whose definition is
>3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. | Rebuttal 1:
Rebuttal: ## Additional discussion in related work.
In the context of few-shot learning, full fine-tuning of the pre-trained model often results in overfitting due to the limited sample size, which consequently diminishes the model's generalization ability. Therefore, it is common practice in few-shot learning scenarios to freeze the feature extractor while fine-tuning the classification head or training additional structures. For instance, CoOp [1] models a prompt's context words with learnable vectors while keeping the entire set of pre-trained parameters fixed. Tip-Adapter [2] and APE [3] do not require any back-propagation to train the adapter but create the weights through a key-value cache model constructed from the few-shot training set. VPT [4] introduces extra learnable parameters into the input space. However, all these methods operate with the backbone frozen, while our paper aims to further explore the possibility of fine-tuning the backbone itself.
## Figure
We have updated the method figure and the visualization of the features extracted by the different models in the pdf.
## Table 1
Previous submissions suffered from inadequate hyperparameter tuning. In response, we meticulously re-tuned our hyperparameters to bolster performance, consequently yielding superior results. To provide a comprehensive perspective, we present the latest outcomes in the subsequent table.
| METHOD | 1 shot | 2 shot | 4 shot | 8 shot | 16 shot |
|:---------------:|:---------:|:---------:|:---------:|:---------:|:---------:|
| CLIP (zero-shot) | 60.33 | 60.33 | 60.33 | 60.33 | 60.33 |
| LP-CLIP | 22.17 | 31.90 | 41.20 | 49.52 | 56.13 |
| WiSE-FT | 63.36 | 64.27 | 66.16 | 68.13 | 69.55 |
| Tip-Adapter | 65.52 | 67.40 | 69.56 | 71.58 | 72.72 |
| FT | 65.24 | 67.42 | 69.85 | 71.73 | 73.00 |
| **FT+FD-Align** | **65.65** | **68.26** | **71.25** | **73.63** | **75.59** |
## Table 2
Performance comparison of replacing our fine-tuned backbone onto an existing method on ImageNet.
|METHOD|1shot|2shot|4shot|8shot|16shot|
|-|-|-|-|-|-|
|Tip-Adapter[2]|64.11|64.36|64.63|65.17|65.49|
|Tip-Adapter+**FD-Align**|64.51|65.33|65.76|66.79|67.28|
|Tip-Adapter-F[2]|64.64|65.18|65.78|67.21|68.43|
|Tip-Adapter-F+**FD-Align**|64.86|65.61|66.11|67.58|68.70|
|APE[3]|65.36|65.69|66.00|66.55|66.55|
|APE+**FD-Align**|66.71|67.29|67.40|67.76|67.69|
|APE-T[3]|65.89|66.18|66.82|67.99|68.74|
|APE-T+**FD-Align**|66.84|67.37|67.81|68.73|69.15|
## Table 3
A comparison of the performance when our fine-tuned backbone replaces that of existing methods on the OOD task. The backbone is fine-tuned on 16-shot ImageNet.
| Method | imageNet | imageNetA | imageNetR | imageNetS | imageNetV2 |
|:--------------------------:|:--------:|:---------:|:---------:|:---------:|:----------:|
| CLIP | 63.34 | 31.57 | 68.45 | 42.31 | 55.92 |
| FT | 64.91 | 30.05 | 68.7 | 42.24 | 57.63 |
| FT+**FD-Align** | 66.39 | 31.8 | 69.7 | 43.5 | 57.73 |
| Tip-adapter | 65.49 | - | - | 42.48 | 57.58 |
| Tip-adapter+**FD-Align** | 65.49 | - | - | 43.84 | 59.10 |
| Tip-adapter-F | 68.43 | - | - | 42.54 | 59.58 |
| Tip-adapter-F+**FD-Align** | 68.70 | - | - | 43.67 | 60.17 |
| APE | 66.55 | - | - | 43.28 | 58.31 |
| APE+**FD-Align** | 67.69 | - | - | 44.23 | 59.36 |
| APE-T | 68.74 | - | - | 43.23 | 59.58 |
| APE-T+**FD-Align** | 69.15 | - | - | 44.04 | 60.83 |
[1] Conditional Prompt Learning for Vision-Language Models
[2] Tip-Adapter: Training-free CLIP-Adapter for Better Vision-Language Modeling
[3] Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement
[4] Visual Prompt Tuning
Pdf: /pdf/355ef98b7843344712971f9da806340e2b85ecae.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a new few-shot learning method leveraging the pre-trained multi-modal backbone CLIP. The method aims to eliminate the spurious information within the text embeddings and consequently regularize the image features to deliver better few-shot learning results. To do so, the authors propose to average-pool the text embeddings for the non-contextual information and enforce their KL distance with the image features. To evaluate the proposed method, the authors conduct experiments on multiple benchmarks in different settings.
Strengths: ++ The authors’ motivation sounds interesting. It’s a new angle to disentangle the text embeddings and accordingly regularize the image features.
++ The results are looking good, the proposed method achieves good results on multiple benchmarks and settings (1, 2, 4, 8, 16 shots).
Weaknesses: -- Although the motivation is clear and interesting and the authors claim to disentangle the causal information from the spurious information, I didn’t see any clear visualisations to support their claim. In Figure 1, the authors show better class centroids for their method, but it cannot directly show the effect of disentanglement. I suggest the authors show some similarity maps on both learned causal and spurious representations in future versions.
-- For the same purpose, what about employing a small off-the-shelf saliency model to first mask the spurious information?
-- This paper could further benefit from improved organization. In Section 2.2, the authors simply list the related works without meaningful discussion. Readers, especially those not directly working on the same topic, would be confused about details such as: 1) Is it common practice to jointly incorporate a frozen and a learnable visual encoder in few-shot learning with CLIP? Is any of the mentioned literature doing so? If not, what is the gain of this design, and what is the learnable-parameter vs. accuracy trade-off? 2) Are there any changes in the prompt group compared with the literature? The authors claim to identify outliers with K-means. However, prompts like `a toy {c}` and `the plastic {c}` cannot describe most of the images and introduce outliers, and it would be easy to simply remove these manually designed prompts.
-- Also, in Section 2.1, the authors mostly introduce the datasets instead of introducing and discussing the few-shot learning literature. It looks like this content could be moved to the experiments section instead of related work.
-- In the proposed framework, the authors apply Isolation Forest as well as K-means to eliminate outlier features; I wonder whether this slows down training and inference.
-- Minor: the scores for Zero-shot CLIP in tables 1 and 2 are not aligned well.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: Lack of clear visualizations to show how causal information is disentangled from spurious information. Better class centroids do not indicate the effect of disentanglement. Similarity maps on both learned causal and spurious representations are encouraged to be shown.
A1: Figure 1 of the paper visualizes the features of the same category under different domains. To better demonstrate the validity of our motivation and the effect of disentanglement, we additionally visualize the features of different categories under different domains, as shown in **Figure 2 in the attached PDF of the "global" response**. We employ t-SNE to visualize distinct categories of image features within the contexts of both cartoon and sketch representations, which enables us to estimate the distribution of each category, delineated by an elliptic region. CLIP exhibits the capability to discern nuanced variations across diverse domains and categories. After fine-tuning the model on miniImageNet's training set, on sketch images, whose domain differs substantially from miniImageNet, **the features extracted by the fine-tuned model become less distinguishable between the two categories**, and **the distinction between domains becomes less obvious**. Our method, in contrast, preserves the differentiation both between categories with larger differences and between domains.
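For readers who wish to reproduce this kind of plot, the procedure is a few lines of scikit-learn (a generic sketch; the synthetic Gaussian features below merely stand in for CLIP embeddings of two categories in two domains, and all dimensions and offsets are illustrative assumptions):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Stand-in features: 2 categories x 2 domains (e.g. cartoon / sketch), 30 samples each.
feats, labels = [], []
for cat in range(2):
    for dom in range(2):
        center = np.zeros(64)
        center[cat] = 4.0        # category offset
        center[32 + dom] = 4.0   # domain offset
        feats.append(center + rng.normal(size=(30, 64)))
        labels += [(cat, dom)] * 30

X = np.vstack(feats)  # (120, 64) feature matrix
emb = TSNE(n_components=2, perplexity=15, random_state=0).fit_transform(X)
print(emb.shape)  # (120, 2)
```

Plotting `emb` with one color per (category, domain) pair, and fitting an ellipse per group, gives the kind of figure shown in the global response.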
Q2: What if employing a small off-the-shell saliency model to first mask the spurious information?
A2: Thanks for the question. Although the concept of utilizing a saliency model to mask spurious information seems aligned with image-related tasks, its applicability largely depends on the specific context. **Since the spurious information may differ across tasks, directly using such a masking approach may degrade performance**. For instance, the image background is usually considered spurious information and would be masked, yet it can also be useful information in some tasks; paper [1] reports worse out-of-distribution (OOD) performance when it employs a similar masking approach. In contrast, our method can be applied beyond "object" recognition. In particular, **employing a saliency model on datasets like EuroSAT is challenging due to the intricate task of satellite-image "terrain" classification**. By utilizing non-contextual information from the text and constraining the model's ability to identify spurious information during fine-tuning, we preserve model robustness and prevent performance degradation.
Q3: Lack of meaningful discussions on related works.
A3: Thanks for the suggestion. Due to space constraints, we have updated the content in global response.
Q4: Is it a common practice to jointly incorporate a frozen and a learnable visual encoder in few-shot learning with CLIP? Is any of the mentioned literature doing so? If not, what’s the gain of this design and what’s the learnable-parameter vs. accuracy trade-off?
A4: In scenarios with ample training samples, a larger number of parameters tends to correlate positively with heightened accuracy. Conversely, in few-shot learning it is easy to overfit because there are too many parameters and not enough data; consequently, a prevailing trend among existing methodologies is to reduce the number of learnable parameters. Within the few-shot learning paradigm, enhancing performance by adding parameters to the network while preventing overfitting proves challenging. It is worth noting that while existing methods have demonstrated good performance, few have attempted to exploit the potential of fine-tuning the backbone. Accordingly, this paper addresses this challenge by fine-tuning the backbone in few-shot learning to reduce overfitting and improve performance. The experimental results in the following table also demonstrate that applying our fine-tuned backbone directly to existing methods effectively improves their performance.
|METHOD|1shot|2shot|4shot|8shot|16shot|
|-|-|-|-|-|-|
|Tip-Adapter[3]|64.11|64.36|64.63|65.17|65.49|
|Tip-Adapter+**FD-Align**|64.51|65.33|65.76|66.79|67.28|
|Tip-Adapter-F[3]|64.64|65.18|65.78|67.21|68.43|
|Tip-Adapter-F+**FD-Align**|64.86|65.61|66.11|67.58|68.70|
|APE[4]|65.36|65.69|66.00|66.55|66.55|
|APE+**FD-Align**|66.71|67.29|67.40|67.76|67.69|
|APE-T[4]|65.89|66.18|66.82|67.99|68.74|
|APE-T+**FD-Align**|66.84|67.37|67.81|68.73|69.15|
Q5: Are there any changes in the prompt group compared with the literature? The authors claim to identify outliers with K-means. Why not delete prompts like `a toy {c}` manually?
A5: To ensure a fair comparison, we employ the identical prompts presented in the original CLIP paper.
It should be clarified that we use Isolation Forest, not K-means, to remove outliers. K-means is employed to mitigate an overabundance of similar prompts, thereby preventing an undue emphasis on a particular context. Since we cannot tell the validity of each prompt, manual deletion may mistakenly remove valid prompts or miss invalid ones. Consequently, we opt to employ Isolation Forest to exclude irrational prompts rather than deleting them manually. As an illustration, in Tip-Adapter[3], the prompts after the authors' manual filtering contain "itap of a {c}." This prompt is not reasonable for our task and leads to performance degradation.
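The two filtering steps can be sketched as follows (a hedged scikit-learn illustration, not our exact code; the embedding dimensionality, contamination rate, and cluster count are assumptions). Isolation Forest first discards outlier prompt embeddings; K-means then keeps one representative per cluster so that no single context is over-represented:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.cluster import KMeans

def filter_prompt_embeddings(embs, contamination=0.1, n_clusters=4, seed=0):
    """Drop outlier prompts, then keep the embedding closest to each K-means centroid."""
    keep = IsolationForest(contamination=contamination, random_state=seed).fit_predict(embs) == 1
    embs = embs[keep]                       # inliers only
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(embs)
    reps = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(embs[members] - km.cluster_centers_[c], axis=1)
        reps.append(members[int(np.argmin(dists))])
    return embs[sorted(reps)]

rng = np.random.default_rng(0)
prompt_embs = rng.normal(size=(40, 16))  # 40 prompt embeddings, 16-d (toy sizes)
filtered = filter_prompt_embeddings(prompt_embs)
print(filtered.shape)  # (4, 16)
```

Both steps run once at training initialization, which is why they do not affect training or inference speed.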
Q6: Related work in Section2.1 and scores in tables 1and 2.
A6: Thanks for your suggestion, we will move it to experiment in the revised version.
Q7: Do Isolation Forest Kmeans slow down training?
A7: No, Isolation Forest and k-means are only run during training initialization. So it doesn't affect training and inference speed.
[1] Masked images are counterfactual samples for robust fine-tuning.
[2] Conditional Prompt Learning for Vision-Language Models
[3] Tip-Adapter: Training-free CLIP-Adapter for Better Vision-Language Modeling
[4] Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement
[5] Visual Prompt Tuning
---
Rebuttal Comment 1.1:
Title: We would be grateful if you could take a look at the response
Comment: Dear Reviewer EKx5,
Thank you for reviewing our paper. Just a friendly reminder that **the author-reviewer discussion will close soon**, and we eagerly await your feedback. In response to your comments, we've updated our t-SNE visualization to more clearly highlight our method's advantages and explained how it's different from existing approaches. Could you please take a look at these updates?
We're here to discuss any more questions or concerns you may have about our paper.
With warm regards,
Authors | Summary: This paper presents a fine-tuning method for pre-trained models in few-shot learning via CLIP's text and visual feature alignment capability. Specifically, the authors use text information to assist in decoupling spurious information from causal information while keeping the spurious information unchanged during the training process. This way, the authors claim the proposed FD-Align model can maintain its generalizability while effectively mitigating the overfitting problem from previous fine-tuning approaches.
Strengths: The concept of utilizing text to distinguish between causal and spurious feature extraction for enhancing the generalizability of few-shot model training is innovative. Additionally, the discovery that retaining the spurious feature extraction component unchanged pre and post fine-tuning can assist fellow researchers in addressing the few-shot learning issue is noteworthy. In addition to the originality of the concept, the results appear to be encouraging on various datasets when compared to other recent works. Finally, the authors also attached the code for easy reproducibility.
Weaknesses: The presentation and illustrated figures could be improved for better clarity. Additionally, the experimental results do not show comprehensive superiority compared to WiSE-FT, indicating some drawbacks in the proposed approaches.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: According to the authors, keeping the original ability to extract spurious features can aid in solving the few-shot learning problem because it helps reduce overfitting. In situations where the image content is straightforward, identifying the correlation between the background and foreground can be a valuable tool for recognizing unfamiliar objects. For example, an airplane is commonly found in the sky or at an airport rather than in a mountainous environment. However, as image content becomes more complex, it is unclear how much background information is still helpful for this task. Additionally, causal/spurious feature disentanglement appears to be reliant on the quality of the prompt, which can introduce uncertainty into the performance since different individuals may provide different prompts.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors used the underperforming scores to explain the limitations of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: The presentation and illustrated figures could be improved for better clarity.
A1: Thanks for your feedback. In response to your suggestion, we have revisited and updated the visual representation of Figure 1. In the modified figure, the data flow and the loss term during the fine-tuning can be shown more clearly. For a more lucid depiction, please refer to **Figure 1 in the attached PDF of "global" response**. We believe the revised figures will offer better clarity and ease of comprehension.
Q2: The experimental results do not show comprehensive superiority compared to WiSE-FT.
A2: Thanks for bringing this to our attention. Upon careful examination, we determined that the minor superiority of our initial experimental results compared to WiSE-FT[2] was due to inadequate hyperparameter tuning. In fact, after re-tuning the hyperparameters of our method, a significant enhancement in performance can be observed. The updated results are shown in the table below.
| METHOD | 1 shot | 2 shot | 4 shot | 8 shot | 16 shot |
|:---------------:|:---------:|:---------:|:---------:|:---------:|:---------:|
| CLIP (zero-shot: 60.33) | | | | | |
| LP-CLIP [1] | 22.17 | 31.90 | 41.20 | 49.52 | 56.13 |
| WiSE-FT [2] | 63.36 | 64.27 | 66.16 | 68.13 | 69.55 |
| Tip-Adapter [3] | 65.52 | 67.40 | 69.56 | 71.58 | 72.72 |
| FT | 65.24 | 67.42 | 69.85 | 71.73 | 73.00 |
| **FT+FD-Align** | **65.65** | **68.26** | **71.25** | **73.63** | **75.59** |
Q3: It is unclear how much background information is still helpful as image content becomes more complex?
A3: In few-shot learning, when the content of an image is more complex, it is more difficult for the model to learn the task-relevant causal information, and easier for it to overfit to task-irrelevant spurious information such as the background. Our method effectively prevents the model from overfitting to such task-irrelevant spurious information; the more complex the background information, the more effective our method is.
Q4: Causal/spurious feature disentanglement appears to be reliant on the quality of the prompt, which may lead to uncertainty in performance.
A4: Thank you for your insightful question. Indeed, the performance will be influenced by the quality of the prompt. However, we would like to clarify that our method has conscientiously taken steps to mitigate this dependency. Firstly, our approach leverages an effective outlier removal algorithm (i.e., Isolation Forest) to eliminate impractical prompt instances, contributing to a more stable process and diminishing the influence of prompt quality. Secondly, we employ the k-means clustering to reduce redundant prompts, thereby ensuring a representative set of data. When sufficient prompts are available (e.g., large pre-trained language models like GPT can easily offer ample relevant prompts), the above techniques can reduce the impact of prompt quality on performance uncertainty to a minimal level. It's worth emphasizing that **an increase in the number of prompts does not adversely affect training time**, as context weight generation is executed solely prior to the training procedure. We hope this explanation addresses your concern and provides a clear understanding of our approach. We will add the above discussion into the revised paper.
[1] Learning Transferable Visual Models From Natural Language Supervision
[2] Robust fine-tuning of zero-shot models
[3] Tip-Adapter: Training-free CLIP-Adapter for Better Vision-Language Modeling | null | null | null | null |
Data Market Design through Deep Learning | Accept (poster) | Summary: This paper studies the market design problem, specifically for data markets. In particular, different from existing analytic approaches, the proposed approach is based on (deep) learning to recover/discover market designs. They adopt and extend an existing RochetNet architecture to both single- and multi-buyer setting and empirically demonstrate the effectiveness of the approach in recovering/discovering the market design.
Strengths: - The paper studies the problem of market design and it is relevant for data market.
- The proposed learning-based approach is interesting in that it can recover some analytic solutions.
- There are relatively extensive empirical results.
Weaknesses: - The motivation and justification of a (deep) learning-based approach can be made stronger.
In lines 40-42, "The difficulty of using analytical tools for this problem of data market design is highlighted by this example, and it remains an open problem to obtain theoretical results for richer multi-buyer settings. This motivates the need for computational approaches." While it is understood that analytic solutions are difficult and that computational approaches seem a viable alternative, is it really necessary to use deep learning? In other words, are there less complex computational approaches that could be tried first, or reasons why they would not work as well?
In particular, (how) can the assumption of i.i.d. samples from $\mathcal{P}$ for training the deep learning model be satisfied? It requires the type of the buyer (i.e., both belief and the $v$) to remain fixed throughout observing the signals. Does this assumption have conflicts with "Upon receiving a signal, the buyers update their prior beliefs and choose an optimal action accordingly" (lines 143-144)?
- The inline equations in the paper can break the flow of the writing and make it more difficult for the reader to catch the most important points.
For instance, equations (1)-(4) are used to discuss (different variants of) incentive compatibility. It is not so clear which equation the reader should pay most attention to. Furthermore, it seems that equation (4) (i.e., ex post incentive compatible) is not interpreted after the equation.
- Some experimental results can be difficult to interpret (or understand their significance), due to the lack of (existing) analytic characterization of optimum solution.
For instance, in lines 294-296, "We are aware of no theoretical characterization of optimal data market designs when both $v$ and $\theta$ vary. In such cases, we can use RochetNet to conjecture the structure of an optimal solution." As a result, it is not clear to the reader how to judge whether the proposed method is effective. This goes back to the first point regarding the motivation/justification of a learning-based approach: there is no solution or ground truth (i.e., analytic optimum or approximate optimum) against which to evaluate the approach. Hence, it seems appealing to first establish such a solution before a computational approach; otherwise, how can the proposed computational approach be effectively evaluated?
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - In lines 20-22, "... hold vast quantities of data about individuals. In turn, this has led to data markets, where information about an individual can be purchased in real-time to guide decision-making (e.g., LiveRamp, Segment, Bloomreach)." This seems to hint at that the aforementioned companies are selling data about individuals, is it what it means?
- In lines 60-62, "Further, we give a training method that enables the efficient reuse of computed interim allocations and payments from other samples to swiftly calculate the interim utility of misreporting, dramatically speeding up training." Is this empirically or theoretically demonstrated, specifically about "dramatically speeding up training"? What is it comparing against, in terms of speed of training?
- In line 122, "The state of the world, $\omega$, is unknown and is drawn from a finite state space ... " Is there an assumption on the distribution of this?
- In line 127, "where each $v_i$ is drawn independently from a distribution $\mathcal{V}_i$". What is the interpretation of $v_i$ and what does the distribution $\mathcal{V}_i$ depend on?
- In lines 137-138, it seems that the negative externality is in the form of decreasing payment for one buyer $i$ as the gain for some other buyers. In other words, if another buyer $j$ gains (in ex post payoff), this buyer $i$ "loses" (i.e., has a lower utility), is this correct? How should this be interpreted in an example?
- In line 139, "There is a data seller who observes the world state ... " How to justify or realize this assumption that the actual world state is exactly known by the data seller?
- In line 159 (5th bullet point), "$u_i(a,\omega, V_i, \theta_i)$", is it meant to be $V_i$ or $v_i$?
- In line 192, "... an unsupervised learning problem." Is it referring to optimizing the softmax version of Equation (9)? If so, it looks more like an optimization problem (i.e., parametric fitting) instead of a learning problem. Often, unsupervised learning is to learn about the inter or intra structure of the data instead of to fit a functional form. Please help interpret why the loss function in line 222 is an unsupervised learning problem.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: - Typically in an optimization approach, if the objective is non-convex (or more complex), it is difficult to establish theoretical guarantees on the optimality or quality of the final solution obtained. This is also mentioned by the authors in lines 374-375. The implication is that it is difficult to obtain a principled understanding of how good the solution (i.e., the learnt market design) obtained from gradient-based optimization is.
- With regard to lines 378-380, "we return to where we started, and underline that markets for trading data about individuals raise a number of ethical concerns." In light of the potential ethical concerns of data trading, a (deep) learning-based approach potentially makes it even more difficult to manage and parse the working mechanism of the data trading. As a result, such an approach can make it even more difficult to reliably/verifably address those concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Justification for using Deep Learning:** Please note that the goal here is to learn a differentiable _function_ that represents an entire mechanism, i.e., a complete mapping from all possible inputs to all possible outputs. The goal is not to simply find a pointwise output for a given input because incentive considerations of mechanism designs rely on reasoning about the global properties of a learned function. Tools from deep learning are very well suited to these kinds of tasks since they are highly configurable and can learn complex and non-linear patterns. In our setting, once we have a differentiable representation of a mechanism and loss function, we can use standard machine learning pipelines to optimize these representations easily (“computation as learning”). As noted, the other added advantage of using a deep learning based approach is the flexibility in accommodating different design requirements.
In regards to the other computational approaches, the most relevant, existing computational method for this data market design problem is Cai and Velegkas (2021), but this focuses on discrete inputs and makes use of linear programming, and does not consider continuous inputs. Handling continuous inputs through discretization and then optimizing for discretized inputs through linear programs (LPs) has been demonstrated to lead to significant scalability issues in the context of computational approaches to the design of optimal auctions Dütting et al. (2019).
In this work, we follow the agenda of differentiable economics and leverage the expressive power of deep neural networks to discover designs that are approximately optimizing and approximately incentive aligned. These approaches have worked well for the settings of auction design, and also two-sided matching problems and problems of social choice, and in this work, we demonstrate that they also perform very well for the design of data markets – with suitable extensions to handle the new behavioral considerations.
**i.i.d assumption:** In this model, all the buyers have a prior belief over which state is likely to occur, and the prior and the payoff for taking the correct action constitutes a buyer’s type. Each buyer then reports their type to the data seller.
From the revelation principle for dynamic games (Myerson 1991), it's sufficient to restrict our attention to mechanisms that are IC (where players report their true types). It’s these types that constitute the training samples from our mechanisms, and thus the assumption of iid samples is satisfied.
A buyer then updates its beliefs if they choose to buy the experiment: they’re sent a signal based on the experiment that they purchase, and this signal is used to update their prior belief. This happens after they’ve taken part in the mechanism, and after the statistical experiments have been sold. Please refer to Ln 152-160 for an overview of the timing of the mechanism.
**Inline equations:** Thanks for pointing this out. We will make the necessary changes to make it more readable.
**Motivation for a learning based approach:** We think about this a little differently – we’re inspired by the challenges with developing theoretical solutions in these multidimensional settings and are looking to the use of computational tools to attack this problem. Can we use this kind of framework to generate interesting conjectures that can then be proved?
This has been a successful pipeline across a number of discovery-based, scientific domains, with in silico design leading to tests in the lab (here, through “the lab” would be “proved” or “tested in the experimental economics lab,” and perhaps “in simulation with AI participants” in the future). One of the main motivations of our work is to provide economists with a new computational tool with which to test conjectures and analyze the structural properties of optimal designs.
---
**Companies and Data:** Indeed, these companies are all customer data platforms which unify first-party customer data from multiple sources to build a single coherent consumer profile. Liveramp, for example, onboards data by matching offline files, names, addresses to digital identifiers so that companies can target users with online ads. Based on this, it aggregates information for a person’s activities from different browsers, devices, and channels that represent a consumer’s digital footprint and provides access to these data via its marketplace.
**Regarding Speedup:** We will add a note about this. In existing BIC networks, for a batch size B with K samples for computing interim values and M misreports, we will have to compute B x K x M forward passes. However, in our approach, we don’t sample new misreports but rather re-use other data points from the minibatch as misreports, thereby doing only B x K forward passes.
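As a back-of-the-envelope check of the saving described in this response (the specific values of B, K, and M below are purely illustrative, not from the paper):

```python
# Counting forward passes per minibatch: sampling M fresh misreports per example
# costs a factor of M, which reusing other minibatch points as misreports removes.
B, K, M = 128, 32, 64        # batch size, interim-value samples, misreports

standard_passes = B * K * M  # fresh misreports per example: B x K x M passes
reuse_passes = B * K         # reuse minibatch points as misreports: B x K passes

print(standard_passes // reuse_passes)  # speedup factor is exactly M, i.e. 64
```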
**Assumptions and Explanations:** Some of the assumptions made here are standard in the mechanism design and information design literature. For better clarity, please refer to the practical example discussed in the response to Reviewer u4po [here](https://openreview.net/forum?id=sgCrNMOuXp&noteId=KNtPhdCieC)
**Typos:** Thanks for pointing this out. It is $v_i$
**Unsupervised learning problem:** We think of this as unsupervised learning because there are no “ground truth labels,” which in our setting would be examples of how to allocate and price information coming from an optimal mechanism. In fact, the optimal mechanism is a priori unknown in many of the settings studied, and this is a problem of discovery of new mechanisms. Instead, the pipeline that we formulate is used to optimize for revenue subject to IC constraints, and we look to interpret the optimized designs, and introduce new conjectures.
---
**Non-Convexity:** Please refer to the global comment [here](https://openreview.net/forum?id=sgCrNMOuXp&noteId=KNtPhdCieC).
**Ethical Concerns:** Please refer to the subcomment.
---
Rebuttal Comment 1.1:
Title: Regarding ethical concerns
Comment: Indeed, there are important ethical concerns concerning markets for trading data about individuals, and we can use the additional page in the camera-ready copy to give an expanded discussion.
Perhaps most interesting, we expect that the techniques introduced in this paper can also easily be extended to identify market designs that make additional, explicit tradeoffs between welfare and concerns regarding user welfare, including privacy. This can be done by modulating the loss function to incorporate additional considerations, for example, providing a continuous tradeoff between revenue and user (privacy-based or otherwise) welfare. This seems very interesting for future work.
In light of the inherent importance of privacy considerations, properly designed and functioning data markets could be an improvement over the many “hidden markets” in the present day, where the quid pro quo trade, for example, between preference information and free content or services, may not be clear to users.
We also note that data markets for selling information already exist and are typically subject to regulatory control and that markets for data will likely continue to exist as an important part of the business landscape. Thus, they are worthy of study, and thorough computational approaches that can afford additional considerations from reality relative to theoretical approaches are necessary.
---
Rebuttal Comment 1.2:
Title: Response to rebuttal
Comment: I thank the authors for their detailed feedback and response. Most of my concerns are addressed, so I will raise my rating to 5.
Some additional notes:
- I do think the deep learning approach is interesting, even though my concerns were about its justification as a design choice. I recommend the authors make clear in their revision what they have discussed in the rebuttals (specifically, why deep learning, and what it is used for).
- Regarding the term "unsupervised learning", I am still a bit unsure whether it's the most apt choice, primarily because unsupervised learning has a conventional meaning attached to it for the machine learning community. Since you do adopt an optimization objective, the name could be something related to optimization and specific to your setting. This is a suggestion. | Summary: This paper introduces a deep learning application to the data market designs that find optimal signaling schemes to maximize the revenue of data sellers. The proposed method is designed to handle truthfulness and obedience (i.e., buyers following recommendations). The overall approach follows the prior frameworks of RochetNet and RegretNet for auction design. The authors are able to demonstrate the method’s ability to recover existing analytical optimal solutions and extend to cases where analytical results are not available. Some experimental results are provided for both single-buyer and multiple-buyer settings.
Strengths: 1. The paper applies deep learning to the new domain of data market design, illustrating the feasibility of learning solutions to optimal data market design.
2. It considers the obedience of data buyers in the design. This makes the approach more practical.
3. The paper provides a sound analysis of Individual Rationality for the mechanism and payments.
Weaknesses: 1. The writing could be improved. Preliminaries could be better structured to explain essential terms like menu entry, signaling, state of the world, how the mechanism works, etc. Interpretations could be added after Lemmas and computation equations (e.g., (10)) to improve clarity.
2. The scales of the experiments are not large enough to be convincing. If larger experiments are not possible, challenges and limitations should be clearly stated.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: **Major**
1. Are there any references to support the assumptions made in the preliminaries section? For example, why is the matching utility payoff reasonable in data market design? How do you interpret that in the binary-state setting in the real world? How about a more complex non-binary setting?
2. For the single buyer setting Lemma 3.1, it is claimed that the mechanism is Incentive Compatible as it is agent optimizing. Why is it agent optimizing when the objective is to maximize the payment by the agents?
3. How to access the validity of the results from the networks when there is no analytical solution (more complex settings)? For example, for the price of 0.14 outputted for setting C, how do you know whether it is close to optimal? Also, could you provide a more intuitive interpretation of the price and results?
4. What are the challenges in conducting experiments on binary states, actions? Also, can you perform experiments on more than two buyers? Can the method be extended to much more complex scenarios with a large number of players, actions and states?
**Minor**
5. Grammar. Lines 80, 103, 242. Punctuations and formats: Lines 146, 153-160, 239.
6. Some notations can be confusing, especially the subscripts, superscripts and brackets.
7. What is $\Delta$ in Line 129, never explained before.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have sufficiently discussed the limitations of the approach in the limitation section. Additionally, I wonder how well this framework applies in real-world scenarios. Could the author clarify the limitations of adopting the method in real life for data pricing, or provide a practical example/application?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the feedback regarding clarity. We will make the necessary edits and add interpretations to our lemmas and equations to make them more readable. Responses to the questions, including concerns regarding scale, are given below.
---
**1. Matching Utilities under binary states**
Our focus in this paper is on the model formulated by Bergemann et al. (2018) and Bonatti et al. (2022), where the assumption of matching utility under binary states is widely used. Under this assumption, the buyer faces *binary* payoffs in each state, where the outcomes can be neatly categorized into ‘right’ and ‘wrong’. In this case, the restriction is without loss of generality relative to a general payoff matrix. Then, for every state (say state 1), we can always subtract the (state-dependent) constant given by the utility of taking the non-matching action (action 2) under the current state (state 1). This linear transformation normalizes the payoffs of the data buyer by setting the off-diagonal utility to 0 without affecting the optimality conditions of the data buyer's decision problem. For more details, see the matching utility section in Bergemann et al. (2018). We also discuss a practical example in the last subsection of this comment.
Furthermore, our choice to primarily focus on matching utility is because this is assumed in the results of Bergemann et al. (2018) and Bonatti et al. (2022), which we use as baselines for a comparison of learned and analytically optimum results. The proposed RochetNet and RegretNet formulation only needs a properly specified utility function to work and is readily able to handle custom utility values. The framework can also be easily extended, without new challenges, to a non-binary state problem.
**2. Clarification regarding agent-optimizing**
We apologize for the lack of clarity. This concept of “agent optimizing” is from the economic theory of mechanism design, and we will add a reference. In particular, it requires that each agent is presented with a menu of options that does not depend on their report, and where their report is – in effect – used to pick an option that maximizes their utility. This is a theoretical framing that allows one to reason about incentive properties. Given this as a design constraint, we can then seek to maximize expected revenue, subject to this property of “agent optimality.” In the present paper, this “agent picking the best option” is hard-coded in the RochetNet architecture in a differentiable way.
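A hypothetical numerical sketch of this idea (not the paper's actual RochetNet code): each buyer type is scored against a fixed menu, and a temperature-scaled softmax stands in for the agent's argmax so the "agent picks the best option" step stays differentiable. The menu entries and temperature below are invented for illustration.

```python
import numpy as np

# Invented menu of (experiment quality, price) pairs; the first entry is the
# null option (buy nothing, pay nothing), which ensures individual rationality.
menu_quality = np.array([0.0, 0.6, 1.0])
menu_price = np.array([0.0, 0.2, 0.5])

def expected_payment(buyer_type: float, temp: float = 100.0) -> float:
    utils = buyer_type * menu_quality - menu_price  # utility of each menu entry
    z = np.exp(temp * (utils - utils.max()))        # numerically stable softmax
    choice = z / z.sum()                            # soft "agent-optimizing" selection
    return float(choice @ menu_price)

# A high-value buyer effectively selects the most informative (priciest) option,
# while a zero-value buyer takes the null option and pays essentially nothing.
print(round(expected_payment(0.9), 2), round(expected_payment(0.0), 2))
```

At a high temperature the softmax approaches the hard argmax, which is how the agent-optimizing property can be hard-coded while keeping gradients available for revenue maximization.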
**3. Validity and Interpretation of results**
For the *ex post IC* setting for the multi-bidder case with uncertain payoffs, we conjecture the structure of the solution and prove its optimality.
For other settings, we conjecture but do not prove the structure of the optimal solution. For Setting C, the conjectured optimal design is a menu of size one that offers a fully informative experiment. For Setting D, we conjecture that the optimal design is a menu of size two with one fully informative experiment and one partially informative experiment. Both these results provide a new target for economic theory.
**4. Scaling up**
We've addressed this in the global comment [here](https://openreview.net/forum?id=sgCrNMOuXp&noteId=OyHh7ioeDB).
**5, 6, 7. Typos and Clarifications**
Thanks for pointing these out; we will address them. The symbol $\Delta$ in Ln 129 denotes the probability simplex over states.
---
**Practical Example and Challenges**
We adapt the following real-world example from Bonatti et al. (2022)
Consider a platform similar to Amazon that wishes to monetize a consumer's information, such as their shopping history, by selling it to retailers. The success of each retailer is influenced by two main things: (i) how accurately they can target their ads to consumers, meaning they show the right products to the right people; and (ii) how unique their ads are compared to what other competitors are offering, based on what consumers like.
Retailers decide how much they're willing to pay for extra information based on how much profit they make from each sale, considering their costs. As this is only privately known to the merchant, the platform must elicit it through its choice of mechanism. This reduces to designing a menu of (experiment, payment) pairs, each corresponding to an advertising campaign.
In this setting, there are two states $\Theta$ = {0, 1} representing consumer preference. There are 2 retailers (indexed by $i$ and $j$) whose goal is to match their product to a consumer’s preference. Their action sets are thus given by $a_i = a_j$ = {0, 1}. The payoffs for retailer $i$ is given by $1_{\theta = a_i} - \alpha \cdot 1_{\theta = a_j}$ where $\alpha$ controls the competition factor. The competitor $j$ having correct information about a consumer increases the competition and imposes a negative externality of $\alpha$ in this case.
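For concreteness, the payoff expression above can be written directly as code (our own illustrative snippet; the competition factor alpha = 0.3 is an arbitrary choice):

```python
# Retailer i's payoff: 1 if i matches the consumer's preference (the state),
# minus an externality of alpha whenever competitor j also matches the state.
def payoff_i(theta: int, a_i: int, a_j: int, alpha: float) -> float:
    return float(theta == a_i) - alpha * float(theta == a_j)

alpha = 0.3
print(payoff_i(1, 1, 0, alpha))  # 1.0: only i matches the consumer's preference
print(payoff_i(1, 1, 1, alpha))  # 0.7: both match, so competition costs i alpha
print(payoff_i(1, 0, 1, alpha))  # -0.3: only the competitor matches
```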
*Challenges* - As discussed in Bonatti et al. (2022), this model abstracts away certain details and dynamics of online advertising. For example, the model does not capture the cost of transmitting data, and nor does the model explain a practical style by which partially correct information can be communicated.
---
*References*
[1] Bergemann, D., Bonatti, A., & Smolin, A. (2018). The design and price of information. American economic review, 108(1), 1-48.
[2] Bonatti, A., Dahleh, M., Horel, T., & Nouripour, A. (2022). Selling information in competitive environments. arXiv preprint arXiv:2202.08780.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: 1. While it is claimed that the choice of “binary states/actions” and “matching utility” is not a necessary assumption for the framework (only adopted due to easy comparison with existing analytical works), no attempts to show the effectiveness of the proposed method on non-binary state problems are made. It is hard to be convinced without concrete support such as experiments.
By the way, you mentioned experiments with 10 agents and 10 states in the global response; could you report the results for this experiment? Also, is there no way to assess the quality of the results (since you say visualizing + conjecturing is the way, but we cannot do so in this complex case)?
2. Thanks for the clarification on “agent optimizing”, yes, please add a reference.
3. I am not entirely sure about the validity of the conjectures you have made. Further, could you explain the "**price**" in your experiment outputs? If only one menu option is given, shouldn't a higher price yield better revenue, which is the overall objective? What should the optimum be?
Thanks for the real-world example, too!
---
Reply to Comment 1.1.1:
Title: Response
Comment: Thank you for engaging us on this. We are sharing additional results in support of both scaling-up and models involving non-matching utility. We will also use the additional page in the final version of the paper to add this analysis. We break the results into those for the _ex post_ IC setting and the Bayesian IC (BIC) setting.
### Scaling
For the results on scaling, see the comment [here](https://openreview.net/forum?id=sgCrNMOuXp&noteId=yemOZcWgh5)
### Non-matching utilities
Extensions to the results in the current paper to handle the case of non-matching utilities are simple because this only involves changing the values in the payoff matrix.
As an illustration, we report here on a replication of the results for a setting with non-matching utilities that is reported in Bergemann et al. [1]. They make a case for complex designs, showing that a seller offering a single, fully-informative experiment can achieve only an O(m) approximation to optimal revenue in settings with $m \geq 3$ actions. We illustrate this in the case of a single buyer, with binary world state, 4 actions, and prior belief $(\theta, 1 - \theta)$ with $\theta$ sampled from a uniform [0, 1] distribution, and payoffs given by this matrix $\mathcal{U} = [[1,0.9, 0.6, 0],[0,0.5,0.7,1]]$.
| **Menu** | **Revenue** |
|------------------|:-----------:|
| Fully Informative | 0.111 |
| RochetNet | **0.119** |
By training a mechanism with our pipeline, we confirm that selling a single, fully-informative experiment yields lower revenue than the menu learned by RochetNet, which involves selling a fully-informative experiment along with a partially-informative experiment with $\pi = [[0, 0, 0, 1], [0, 0, 0.6, 0.4]]$.
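To make this replication concrete, the following is an illustrative sketch (our own, not the authors' code) that numerically estimates the revenue from selling only a single fully-informative experiment at the best take-it-or-leave-it price, under the payoff matrix $\mathcal{U}$ above; a crude grid search lands close to the 0.111 figure in the table.

```python
import numpy as np

# Payoff matrix from the example above: rows = world states, columns = actions.
U = np.array([[1.0, 0.9, 0.6, 0.0],
              [0.0, 0.5, 0.7, 1.0]])

thetas = np.linspace(0.0, 1.0, 20_001)  # uniform prior over theta on [0, 1]

# Buyer's value of full information: expected best-in-hindsight payoff minus
# the payoff of the best uninformed action under the prior.
full_info = thetas * U[0].max() + (1 - thetas) * U[1].max()
uninformed = np.max(thetas[:, None] * U[0] + (1 - thetas)[:, None] * U[1], axis=1)
voi = full_info - uninformed

# Best take-it-or-leave-it price: maximize p * P(voi >= p) over a price grid.
prices = np.linspace(0.0, voi.max(), 1_001)
revenues = prices * np.array([(voi >= p).mean() for p in prices])
best = revenues.max()
print(f"best fully-informative revenue ~ {best:.3f}")
```

This matches the qualitative message of the table: a single fully-informative experiment leaves revenue on the table relative to the two-item menu learned by RochetNet.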
### Why we can't simply increase the price
In this work, we seek to maximize revenue while incentivizing truthful behavior from buyers. Increasing the price associated with an experiment involves a tradeoff: on one hand, it improves revenue when the experiment is sold; on the other hand, it may push buyers away from the experiment in favor of purchasing another experiment or declining to purchase information altogether.
To illustrate this tradeoff, consider the following simple scenario: the matching utility case and a single buyer whose payoff is uniform on [0, 1]. In this case, the optimal (_ex post_) IC mechanism offers a fully informative experiment at a price of 0.5. The buyer opts to purchase if the expected value of information from the experiment is larger than 0.5. Given the buyer’s prior, this happens with probability 0.5, resulting in a revenue to the seller of 0.25. However, at a higher price of 0.6, the probability of purchase falls to 0.4, giving lower revenue of $0.6 \times 0.4 = 0.24$.
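The arithmetic above can be checked with a minimal sketch (assuming, as in the example, that the buyer's value of information is uniform on [0, 1], so the purchase probability at price p is 1 - p):

```python
import numpy as np

prices = np.linspace(0.0, 1.0, 1_001)
revenue = prices * (1.0 - prices)   # expected revenue = price * P(purchase)

best_price = prices[np.argmax(revenue)]
print(best_price, revenue.max())    # 0.5 0.25
# Revenue at the higher price of 0.6 falls to about 0.24, as computed above.
print(0.6 * (1.0 - 0.6))
```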
### Validity of Conjectures
We refer to Appendix C.2 for additional experiments where we study how the differential informativeness of experiments varies with properties such as the distribution of belief types of buyers, including the precision of buyer beliefs. Understanding this kind of structure between economic primitives and optimal market design is interesting within economic theory, as demonstrated by [2], who examined this setting for discrete types and established analytical solutions. In our experiments, we use this new framework to develop support for analogous conjectures in the setting of continuous types. As an additional illustration, our framework also allows us to identify scenarios where a more sophisticated menu can outperform a market that sells a single, fully-informative experiment: in Section 5, we showcase instances where our framework learns to sell an additional, partially-informative experiment to boost revenue. Again, this provides a target for economic theory.
---
*References*
[1] Bergemann, D., Cai, Y., Velegkas, G., & Zhao, M. (2022, July). Is Selling Complete Information (Approximately) Optimal?. In Proceedings of the 23rd ACM Conference on Economics and Computation (pp. 608-663).
[2] Bergemann, D., Bonatti, A., & Smolin, A. (2018). The design and price of information. American economic review, 108(1), 1-48. | Summary: The authors are concerned with a problem of "data market design". In such a setting, a mechanism designer with access to an unknown world state interacts with buyers who have private types, and need to take actions whose payoffs vary depending on the world state. These buyers purchase (in the single buyer case) or bid on (in the multi-buyer case) access to a signaling scheme which, given reports from the agents and the world state, sends a signal to the buyers (which without loss of generality can just be a recommended action). This mechanism, treated as a direct-revelation mechanism, needs to be both truthful (incentivizing honest reporting by the buyers) and obedient (once the buyers receive their signal, they should be incentivized not to deviate from the recommendation). Subject to those constraints (either Bayesian or ex post), the mechanism designer wants to maximize their revenue.
This problem shares some similarities to truthful revenue-maximizing auction design. In that domain, there has been recent progress using the tools of "differentiable economics" to approximately learn high-performing (and sometimes even provably optimal) auctions, in both single- and multi-bidder settings.
The authors apply very similar techniques to this data market problem. In single-buyer settings (as in auctions) they are able to ensure exact IC; for multi-buyer settings they use a Lagrangian during training to approximately enforce IC constraints. They experiment on a relatively wide variety of problem instances, reproducing known results, finding new optimal mechanisms, and conjecturing optimal mechanisms where they cannot find them.
Strengths: The paper comprehensively shows how to successfully apply differentiable economics to a new domain where it has not previously been applied. The authors are able to reproduce optimal mechanisms and find new ones, showing that their adaptation of these techniques is in fact useful in producing novel results. This helps to further push these techniques towards being practically helpful tools for theorists and modelers.
Weaknesses: The network architectures here are essentially the same as those used in previous work for auctions, only adapted slightly for the data market setting. This is fine, but it does mean that from the perspective of differentiable economics, there is no novel methodological contribution.
The experiments appear to consider at most 2 buyers. While (as in the case of multi-parameter auctions) even selling to just two buyers may be a very challenging case, it would be more interesting to consider a slightly larger number of buyers.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: Can the method in fact scale to larger (even just 3-5 buyers) settings, or not? This should be discussed.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 3 good
Limitations: See questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the review and the feedback. We've discussed concerns regarding the weaknesses - especially regarding **Methodological Contributions** and **Scaling up** - in the global comment [here](https://openreview.net/forum?id=sgCrNMOuXp&noteId=OyHh7ioeDB).
---
Rebuttal Comment 1.1:
Title: Acknowledging rebuttal
Comment: Thanks for your response here. I acknowledge that the BIC techniques do constitute a new methodological contribution, although this is not the main point of the paper. Thanks also for checking that scaling up can generally work, at least for DSIC. My general conclusion about this paper has not changed. | Summary: This paper introduces a deep learning framework for the automated design of data markets, a novel and timely application in the field of economics. The authors address the data market design problem, which involves designing a set of signaling schemes to maximize expected revenue. The paper extends previous work on deep learning for auction design by learning signaling schemes and handling obedience constraints that arise from the actions of agents.
Strengths: - Innovative Application: The paper introduces a novel application of deep learning to the data market design problem, expanding the scope of machine learning in the field of economics.
- The paper is well-written overall.
Weaknesses: - Incremental work: It seems that the core contribution, the proposed neural network architecture, is a simple extension of an existing model, RochetNet, obtained by slightly modifying the loss function.
- Lack of comparison with baselines: Mechanism design for information acquisition is a long-standing problem. I was surprised to see no baseline comparison in the experiments, and no discussion in the methodology of how/why existing approaches may not work.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: What are some baseline methods to compare with? For example, how does the standard rochetnet perform on the proposed market settings?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Novelty**
We've addressed this in the global comment [here](https://openreview.net/forum?id=sgCrNMOuXp&noteId=OyHh7ioeDB).
---
**Comparison with other baselines**
We are unaware of other computational baselines, and we already compare our learned designs with the theoretically optimal designs from [1, 2] where these are available. In terms of prior computational results, we primarily deal with continuous distributions, whereas [3] provides computational results for discrete types. Handling continuous distributions through discretization using linear programs (LPs) is understood to lead to significant scalability issues, as demonstrated for optimal auction design [4].
We also discuss some alternate data market formulations in the related work section. But these are different problems, and solution approaches to these problems do not provide baselines for the problem formulation that we take up.
RochetNet cannot be used for the data market setting, since this setting departs from optimal auction design in several substantive ways. The incentive compatibility constraints in the data market setting are more complex, comprising both a truth-telling part (reporting one’s type truthfully) and an obedience part (taking the recommended action instead of deviating to another action). If we adopted the standard RochetNet idea from the auction setting, we would miss the obedience part, implicitly assuming that agents act according to our action recommendations when they might instead deviate strategically to other actions.
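As a concrete illustration of the obedience part (a toy sketch of our own, not the paper's implementation), one can check, for a binary-state setting with an assumed payoff matrix `U`, whether each recommended action is a best response to the buyer's posterior belief:

```python
import numpy as np

def is_obedient(pi, U, theta):
    """Check obedience: pi[s, a] = P(recommend action a | world state s)."""
    prior = np.array([theta, 1.0 - theta])
    for a in range(pi.shape[1]):
        p_signal = prior @ pi[:, a]
        if p_signal == 0:                        # action a is never recommended
            continue
        posterior = prior * pi[:, a] / p_signal  # Bayes update on the recommendation
        payoffs = posterior @ U                  # expected payoff of each action
        if payoffs[a] < payoffs.max() - 1e-12:   # a profitable deviation exists
            return False
    return True

U = np.array([[1.0, 0.9, 0.6, 0.0],
              [0.0, 0.5, 0.7, 1.0]])
# Fully-informative experiment: recommend action 0 in state 0, action 3 in state 1.
pi_full = np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0, 1.0]])
# An experiment that always recommends action 1 is not obedient here: the
# buyer would deviate to a better action under the (unchanged) prior.
pi_bad = np.array([[0.0, 1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0]])
print(is_obedient(pi_full, U, theta=0.3))  # True
print(is_obedient(pi_bad, U, theta=0.3))   # False
```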
Problems of information design, particularly “Bayesian Persuasion,” have been studied in both economics and computer science, but this is a quite different setting from that of the present paper. In particular, it does not model problems of revenue, bidding, or multiple receivers (or competition across multiple receivers).
Problems of optimal auction design are also widely studied in computer science and economics, but are very different in structure from the data market problem; e.g., much of the auction literature considers the sale of one or more “rival goods” (i.e., goods that, unlike information, cannot be allocated to multiple people). Moreover, the auction design literature does not deal with posterior beliefs arising from the sale of information, or with the effect of beliefs on downstream actions.
---
*References*
[1] Bergemann, D., Bonatti, A., & Smolin, A. (2018). The design and price of information. American economic review, 108(1), 1-48.
[2] Bonatti, A., Dahleh, M., Horel, T., & Nouripour, A. (2022). Selling information in competitive environments. arXiv preprint arXiv:2202.08780.
[3] Cai, Y., & Velegkas, G. (2021). How to sell information optimally: An algorithmic study. 12th Innovations in Theoretical Computer Science Conference (ITCS 2021), pages 81:1--81:20.
[4] Dütting, P., Feng, Z., Narasimhan, H., Parkes, D., & Ravindranath, S. S. (2019, May). Optimal auctions through deep learning. In International Conference on Machine Learning (pp. 1706-1715). PMLR. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their helpful feedback. Three main comments are regarding non-convexity of our formulation, novelty and scaling up. We address these concerns here. The remaining questions and concerns are addressed in the individual responses.
---
**Non-Convex Formulations and Alternate Optimization Methods**
Although local optima can be a concern when optimizing non-convex objective functions, gradient-based methods have demonstrated remarkable success, with extensive empirical evidence showing convergence to high-quality solutions even in the presence of non-convexity. There is also theoretical support for a “no local optima” phenomenon in the ML theory literature. In addition, a recent paper [1] shows that local optima are connected by a path along which the revenue is at least as high as at one of the endpoints, justifying the empirical success of RochetNet for auction design. Although this remains open, we conjecture that similar theoretical results hold for the version of RochetNet adapted to the present paper.
Regarding our specific experiments, one way in which we validate the effectiveness of the proposed method is by looking at settings in which theoretical results are available. In this way, we demonstrate that we can reliably recover optimal solutions for all prior settings in which analytical solutions are known (see the continuum-of-types setting in Section 5 and the BIC settings in Section 6).
Considering alternate computational approaches for the problem that we study, while [2] propose using LPs for settings with discrete types, we don’t expect this to extend to the continuous settings of the present paper since introducing discretization to enable the application of LPs leads to significant scalability challenges, as demonstrated in [3] in application to revenue-optimal auction design.
---
**Novelty**
We respectfully disagree that attaining the ability to train models in this pipeline is “just about modifying the loss function” from earlier work. Rather, the crucial aspect is to capture a new kind of strategic behavior within the learning pipeline. A significant innovation is to be able to handle both _obedience_ and _truthfulness_ constraints on behavior, including _double deviations_ (misreporting, along with deviating from suggested actions). To our knowledge, this is novel in the differentiable economics literature.
We also introduce new sampling techniques for achieving BIC, which are able to reuse interim allocations and interim payment calculations to compute IC violations. In existing BIC networks [4], for a batch size $B$, with $K$ samples for computing interim values and $M$ misreports, we would have to compute $B \times K \times M$ forward passes. By contrast, in our approach, we don’t sample new misreports but rather reuse the other data points in the minibatch as misreports, thereby performing only $B \times K$ forward passes. This substantially speeds up the training process.
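The forward-pass accounting can be illustrated with a small counting sketch (with a hypothetical stand-in `net`, not the actual network):

```python
B, K = 8, 4        # batch size and number of samples for interim values
M = B - 1          # in the reuse scheme, misreports are the other batch points

calls = 0
def net(report, sample):
    """Stand-in for one forward pass computing an interim allocation/payment."""
    global calls
    calls += 1
    return 0.0

# Naive approach: each batch point gets M fresh misreports, each of which
# must be evaluated under K samples.
calls = 0
for b in range(B):
    for m in range(M):
        for k in range(K):
            net((b, m), k)
naive_passes = calls                 # B * K * M forward passes

# Reuse approach: one pass per (batch point, sample); the cached interim
# values then serve as misreports for every other batch point at no extra cost.
calls = 0
interim = {b: sum(net(b, k) for k in range(K)) / K for b in range(B)}
reuse_passes = calls                 # B * K forward passes

print(naive_passes, reuse_passes)    # 224 32
```

Here `M = B - 1` matches the reuse scheme; in general the naive count grows multiplicatively in the number of misreports, while the reuse count does not.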
Another contribution is the set of new results we provide in the application domain itself, a domain garnering considerable interest in economics and computer science and one of substantial societal importance, given the multifaceted nature of the design problem. Whereas analytical results are only available for the BIC setting, which is, in effect, lower-dimensional and easier to analyze, we are able to study through computational techniques the design of data markets in the _ex post_ IC setting, a setting without existing theory. In Section 6, for the _ex post_ IC setting, we conjecture the structure of the optimal designs and prove their optimality through Myerson's framework (proof in the Appendix). We see this as an important contribution, as _ex post_ IC is a stronger notion of IC than BIC.
In addition, we provide an illustrative example to showcase the framework's versatility as a toolbox for economists. For instance, in Section 6, we study how the revenue varies as we vary the intensity of negative externality, thereby varying the competition. This hasn’t been studied before for the setting with uncertain priors, and we show economically meaningful variation, as no analytical solution is known.
---
**Scaling up**
We choose to study the case of two buyers because it is easier to visualize the results and develop conjectures. We would also like to emphasize that Theorem 6.1, which is stated for the *ex post* IC setting with uncertain payoffs, holds for any number of bidders.
The described approach can scale to more buyers and states, especially in the ex post IC settings. For instance, in the case of uncertain priors, for 10 agents and 10 states, our approach takes 62 min to run for 20000 iterations on a single NVIDIA Tesla V100 GPU.
Scaling is harder for the BIC setting, however. As we increase the number of agents to $n$, obtaining the interim values accurately involves computing the marginal values over $n - 1$ dimensions. This can be quite expensive.
We will add a discussion on scaling to the paper.
---
*References*
[1] Hertrich, C., Tao, Y., & Végh, L. A. (2023). Mode Connectivity in Auction Design. arXiv preprint arXiv:2305.11005.
[2] Cai, Y., & Velegkas, G. (2021). How to sell information optimally: An algorithmic study. 12th Innovations in Theoretical Computer Science Conference (ITCS 2021), pages 81:1--81:20.
[3] Dütting, P., Feng, Z., Narasimhan, H., Parkes, D., & Ravindranath, S. S. (2019, May). Optimal auctions through deep learning. In International Conference on Machine Learning (pp. 1706-1715). PMLR.
[4] Feng, Z., Narasimhan, H., & Parkes, D. C. (2018, July). Deep learning for revenue-optimal auctions with budgets. In Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems (pp. 354-362). | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors present a novel approach to the problem of data market design, which seeks to find a set of signaling schemes, each revealing some of the information known to a seller and having a corresponding price, where the goal is to maximize expected revenue. Then, the authors introduce the application of a deep learning framework to the automated design of the data market. The paper discusses the importance of data market design and its potential applications in real-world settings, such as data marketplaces where sellers sell data to buyers for ML tasks. The authors demonstrate that their new learning framework can replicate known solutions from theory, expand to more complex settings, and establish the optimality of new designs. The paper also highlights some limitations of the approach, such as the need for interpretability of the mechanisms learned by the RegretNet approach for larger problems, the potential for local optima in non-convex problems, and the challenge of achieving exact incentive alignment in multi-buyer settings.
Strengths: + The paper presents a novel approach to the problem of data market design, which uses deep learning to automate the design of data markets.
+ The authors demonstrate that their new learning framework can almost precisely replicate all known solutions from theory, which shows that the approach is effective and reliable.
+ The paper shows that the new learning framework can be used to establish the optimality of new designs and conjecture the structure of optimal designs, which is a significant contribution to the field.
Weaknesses: + The paper acknowledges that for the approach to provide insights into the theoretically optimal design for larger problems, it will be important to provide interpretability to the mechanisms learned by the approach. However, the RegretNet approach used in the paper is not immediately interpretable, which limits its usefulness in this regard.
+ The paper notes that the approach uses gradient-based approaches, which may suffer from local optima in non-convex problems. This suggests that the approach may not always find the global optimum and may be limited in its ability to handle more complex problems.
+ The paper attains in the multi-buyer setting approximate and not exact incentive alignment, which leaves the question as to how much alignment is enough for agents to follow the intended advice of a market design. This suggests that the approach may not be able to achieve exact incentive alignment in all settings, which could limit its effectiveness.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: + Could you provide more details on how the RegretNet approach can be made more interpretable for larger problems? Are there any specific techniques or methods that could be used to achieve this?
+ Have you considered using other optimization techniques besides gradient-based approaches to address the potential for local optima in non-convex problems? If so, what are some alternative approaches that could be used?
+ What are some potential ways to provide more practical or theoretical guidance on how much alignment is enough for agents to follow the intended advice of a market design? Are there any existing frameworks or approaches that could be used to address this issue?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors acknowledge the ethical concerns raised by markets for trading data about individuals and suggest that machine learning frameworks such as those introduced in this paper can be used to strike new kinds of trade-offs, such as allowing individuals to benefit directly from trades on data about themselves. This shows that the authors are aware of the broader implications of their work and are thinking critically about its potential impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Interpretability**
We can interpret the designs learned by RegretNet in a few ways.
- We can plot the probability of recommending the correct action for different agent types. We adopt this approach and visualize these as heatmaps in the main paper (Figure 2) and Appendix (Figures 8, 9, and 10) to show different optimal designs. This kind of visualization borrows from the existing economics literature and can be extended for larger designs.
- We can compare the similarity of outputs learned by RegretNet to well-known baselines and analyze how economic properties of interest vary with different design choices. For instance, [1, 2] study how externalities affect the optimal design of data markets. In our paper, as an illustrative example, we show how we can understand revenue patterns for new settings, studying here the effect of varying the intensity of negative externality on revenue.
- Another interesting direction, not taken here but interesting for future work, is to initialize the mechanism to a known baseline, use our framework to optimize it further and see whether known mechanisms can be improved.
- Another interesting direction for future work is to consider techniques such as distillation to “compile” a design into something more interpretable. Generally, we like that this agenda is “future-proof,” meaning that advances in machine learning, including distillation and interpretability, can be applied here.
---
**Non-Convex Formulations and Alternate Optimization Methods**
We've addressed this in the global comment [here](https://openreview.net/forum?id=sgCrNMOuXp¬eId=OyHh7ioeDB).
---
**Approximate Incentive Alignment**
First, we emphasize that we can achieve very small empirical regret violations (see Fig. 11 and Fig. 15 in the Appendix). We find this encouraging. Another useful observation, in support of the effectiveness of approximate incentive alignment, comes from the machine learning pipeline almost exactly recovering optimal designs for all known settings (as per the above comments).
That said, it remains an empirical question as to how much alignment “is enough.” We would expect this to depend on context, e.g., how complex it is to deviate from truthful reporting vs. satisficing behavior, how high the stakes are, and how well-informed the participants are. An interesting path forward in this regard would be to develop simulators of behavior as an additional way to test performance. There is also a growing literature on modeling complexity in the context of strategic behavior, e.g., [3].
In the context of auction design, there is also some recent guiding theory [4, 5, 6] that provides transformations between $\epsilon$-BIC and BIC without revenue loss. Interesting avenues for future work include extending these transformations to more general classes of problems, i.e., problems with both types and actions, as in the data market setting. It also remains open to extend these results to obtain $\epsilon$-DSIC to DSIC transformations for general distributions.
---
*References*
[1] Agarwal, A., Dahleh, M., Horel, T., and Rui, M. (2020). Towards data auctions with externalities. arXiv preprint arXiv:2003.08345
[2] Bonatti, A., Dahleh, M., Horel, T., & Nouripour, A. (2022). Selling information in competitive environments. arXiv preprint arXiv:2202.08780.
[3] Modibo K. Camara. 2022. Computationally Tractable Choice. In Proceedings of the 23rd ACM Conference on Economics and Computation (EC '22)
[4] Daskalakis, C., & Weinberg, S. M. (2012, June). Symmetries and optimal multi-dimensional mechanism design. In Proceedings of the 13th ACM conference on Electronic commerce (pp. 370-387).
[5] Cai, Y., Oikonomou, A., Velegkas, G., & Zhao, M. (2021). An Efficient ε-BIC to BIC Transformation and Its Application to Black-Box Reduction in Revenue Maximization. In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA) (pp. 1337-1356). Society for Industrial and Applied Mathematics.
[6] Conitzer, V., Feng, Z., Parkes, D.C., Sodomka, E. (2022). Welfare-Preserving $\epsilon$-BIC to BIC Transformation with Negligible Revenue Loss. In: Feldman, M., Fu, H., Talgam-Cohen, I. (eds) Web and Internet Economics. WINE 2021.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' responses.
I am wondering about the case of malicious bidders, when most of the bidders are colluding. Would the seller then be underpaid?
---
Reply to Comment 1.1.1:
Comment: Collusion is indeed a valid concern in auction design and one that can lead to underpayment. Although our setting is more complex, which may make it harder to sustain collusive agreements between bidders, collusion could still be a problem in the present setting. Moreover, collusion and bid rigging are legally prohibited in many contexts under regulations such as the Sherman Antitrust Act [2]. Considering this, we follow the majority of the auction and mechanism design literature and focus on individual strategyproofness.
The theoretical economics literature does also formalize the concept of collusion resistance, as well as the weaker concept of _group strategyproofness_; these notions consider deviations by groups of bidders and differ as to whether or not they allow side payments between bidders. In auction settings, the requirement of collusion resistance is in fact very strong and essentially equivalent to requiring take-it-or-leave-it prices [3] (in the other direction, there is some theory identifying when individual strategyproofness is sufficient for group strategyproofness [1]).
To our knowledge, collusion considerations have never been formally studied in the context of the design of data markets. Certainly, it would be interesting to extend our notions of regret to allow for _group regret_, suitably defined and studied in the context of our machine-learning pipeline. Indeed, this pipeline allows for intermediate definitions, for example, considering deviations by groups of a limited size.
---
*References*:
[1] Barberà, S., Berga, D., & Moreno, B. (2010). Individual versus group strategy-proofness: When do they coincide?. Journal of Economic Theory, 145(5), 1648-1674.
[2] U.S. Department of Justice. (2016). Sherman Antitrust Act. Retrieved from https://www.justice.gov/d9/pages/attachments/2016/01/05/211578.pdf
[3] A. V. Goldberg, J. D. Hartline, Collusion-resistant mechanisms for single-parameter agents, Proc. SODA '05: Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms 2005, pp. 620–629 | null | null | null | null | null | null |
On Sample-Efficient Offline Reinforcement Learning: Data Diversity, Posterior Sampling and Beyond | Accept (poster) | Summary: This paper proposes a new notion of data diversity that expands all the prior data diversity conditions in offline reinforcement learning literature. Based on this notion, the paper develops concrete value sampling-based, regularized optimization-based and posterior sampling-based algorithms with corresponding sample complexity results. The paper shows that these three algorithms can achieve comparable sample efficiency.
Strengths: 1. The proposed data diversity condition and its corresponding suboptimality guarantee seem to be general and tighter than previous diversity conditions characterizing the state-of-the-art bounds for the main streams of offline reinforcement learning frameworks. This data diversity condition is a potential inspiration for future work.
2. The writing is clear and presents a fair and clean picture compared to previous results.
Weaknesses: There are some typos and inconsistencies in the notation, such as $p_{0,h}$ in $p_{0,h}(\mathcal{F}_h^{\tilde{\pi}}(\epsilon; f\_{h+1}))$ in line 176 and $f_h$ and $g_h$ in Algorithm 2, which should be $f$ and $g$, respectively. Similar inconsistencies appear in Algorithm 3.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Could you please explain how the new diversity notion is able to cover value sampling-based, regularized optimization-based, and posterior sampling-based algorithms, especially posterior sampling-based algorithms? Is it because the previous diversity conditions are not always able to cover all three types of algorithms? If the previous conditions are able to cover all three types, is the main advantage of the new notion its tightness? If the previous conditions cannot cover any of the algorithm types, could you describe the reason behind their failure and why the new notion is able to succeed?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: See weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your positive feedback. We elaborate further on the data diversity notion below.
---
**Question**: **About (the advantage of) data diversity**
**Our response**: Our notion of data diversity provides a tighter characterization of distribution shift than the previous notions of data coverage and facilitates a unified framework for the analysis of the three algorithms we consider. That said, our tight(er) results (especially) for the regularized optimization-based method and the (new) posterior sampling-based method come from our refined analysis, not from the data diversity notion alone. The idea of the data diversity notion, which we arrived at naturally and which turns out to echo ideas from the transfer learning literature, is to account for the error incurred when we decouple the Bellman error under a target policy into the squared Bellman error under the behavior policy. With the data diversity notion, we showed that the set of offline-data scenarios in which offline RL is learnable is enlarged compared to the picture depicted by the prior data coverage notions. With our refined analysis, we showed that the regularized optimization-based algorithm and the new (pessimistic) posterior sampling algorithm are not only provably efficient but also have guarantees comparable to those of the (intractable) version space-based algorithm (under standard assumptions).
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification. I will keep my belief as before.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to read our rebuttal. | Summary: This paper investigates the problem of sample-efficient learning from historical datasets in the context of offline reinforcement learning and explores the role of data diversity and posterior sampling in improving the efficiency of RL algorithms. The authors propose a new notion of data diversity and study three classes of offline RL algorithms based on version spaces, regularized optimization, and posterior sampling. They find that these algorithms achieve comparable sample efficiency, contrary to prior work. The paper also introduces a novel model-free posterior sampling algorithm for offline RL.
Strengths: This work studies offline RL problems within the context of general function approximation. The introduction of the novel notion of data diversity is intriguing, as it offers a fresh perspective on the problem. Additionally, the inclusion of the posterior sampling algorithm is interesting, as it appears to be a new contribution to the literature on offline RL theory.
Weaknesses: It is important to ensure the accuracy of the comparison with LCB-based algorithms, as the algorithm mentioned in [JYW21] may not be considered the SOTA algorithm when compared to improved LCB-based algorithms proposed in works such as [1] and [2], which have demonstrated better dependencies on d.
Additionally, it is worth noting that there has been an update in [XCJ+21], where a new arXiv version employs $\Pi^{soft}$ instead of $\Pi^{all}$. While this minor issue does not impact my evaluation, it would be beneficial for the authors to acknowledge this update and adjust the comparisons accordingly.
[1] Near-optimal offline reinforcement learning with linear representation: Leveraging variance information with pessimism
[2] Nearly Minimax Optimal Offline Reinforcement Learning with Linear Function Approximation: Single-Agent MDP and Markov Game
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: n/a
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your positive feedback. We address all of your concerns below.
---
#### **Concern 1**: **Compare with [1][2]**
**Relevant refs**:
> [1] Ming Yin, Mengdi Wang, Yu-Xiang Wang. “Near-optimal offline reinforcement learning with linear representation: Leveraging variance information with pessimism”. ICLR 2022.
> [2] Wei Xiong, Han Zhong, Chengshuai Shi, Liwei Wang. "Nearly Minimax Optimal Offline Reinforcement Learning with Linear Function Approximation: Single-Agent MDP and Markov Game". ICLR 2023.
**Our response**: We will update the comparison of the instantiation of our results to the linear function class to the results of [1][2] in our revision. In this case, the bounds in [1][2] have the same dependence on $d$ as our bound and have a tighter dependence on $H$. We remark that this improved dependence on $H$ of [1][2] is due to the variance-weighted value iteration algorithm that they employ to capture different (heteroscedastic) variance of the transition kernels $P_h$ at different time steps $h$. This improved dependence on $H$ comes at the cost that the algorithm is more complicated and they need the offline data to be explorative over all dimensions of the feature map so that the estimation error of the variance can be controlled.
We remark that leveraging the variance information (using the variance-weighted value iteration) as in [1][2] is **complementary/orthogonal** to the basic algorithms we study in our paper, i.e., this idea can be added to our algorithms to improve the dependence on $H$, as we discussed as future work in Section 5.
---
**Concern 2**: **Compare with updated version of [XCJ+21]**
**Our response**: For the version space-based result, we compared with Theorem 3.1 of (the latest version of) [XCJ+21], where their bound scales with the complexity of the comparator policy class (not the soft policy class). We believe that the updated version was meant to correct their Theorem 3.2. For the comparison in the linear case, we included the bound of Theorem 3.2 of their updated version in Table 2 of our supplementary. For the regularized optimization-based algorithm, [XCJ+21] employed $\Pi^{soft}$ instead of $\Pi^{all}$, as we acknowledged in our Table 1. Regardless, we fully credited [XCJ+21] with introducing the version space-based algorithm and the regularized optimization-based algorithm (for offline RL) (e.g., see Lines 189-193).
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. However, I would like to point out that without the explorative assumption and the variance-weighted algorithm design, LCB-based algorithms can enhance their dependency on $d$ by employing the reference-advantage decomposition [2] or data splitting scheme [JYW21]. So I would like to maintain my belief that asserting an improvement over LCB-based algorithms remains improper.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the further comments on the comparison with LCB-based algorithms. We agree that for the enhanced dependence on $d$ alone, the LCB-based algorithm in [2] does not need the variance-weighted algorithm design. We, however, remark that, as far as we can tell, this enhanced dependence on $d$ in [2] requires the explorative assumption (their Assumption 1) (e.g., please see the “High-order Error from Correlated Advantage Function” paragraph in Section 5 of [2] and the detailed proof of Theorem 1 in Section D of [2]). To the best of our knowledge, we are unaware of any reference-advantage decomposition argument that has an enhanced dependence on $d$ without some explorative assumption such as Assumption 1 of [2]. If we’ve missed any reference otherwise, we would be happy to take any suggestion to make our comparison more accurate.
We also agree that the data splitting scheme in [JYW21] also has an enhanced dependence on $d$. However, the data splitting scheme in [JYW21] only uses $K/H$ episodes (instead of using all $K$ episodes) for estimating the value functions. As a result, the improved $d$ dependence of the data splitting in [JYW21] comes at the cost of much looser dependence on $H$. As a concrete example, in the finite spectrum condition and under the well-explored dataset, the sub-optimality presented in Proposition 4.11 of [JYW21] incurs an additional factor of $\sqrt{H}$ (as also discussed by [JYW21] in the paragraph after Proposition 4.11).
We appreciate your suggestion and will continue to update our revision to make accurate comparisons with LCB-based algorithms ([1][2]) based on the discussion here. | Summary: This work aims to point out what enables the sample efficiency of offline reinforcement learning.
The authors first define a new notion of data diversity based on relating the Bellman error under one policy to the Bellman error under a different policy.
The authors then propose a unified view, the Generic Offline Policy Optimisation framework, to study three classes of offline algorithms: version spaces, regularised optimisation, and posterior sampling.
Under standard assumptions, the authors show that all three algorithms can achieve comparable upper bounds on the policies' sub-optimality.
Strengths: 1. The proposed view is novel, and can unify the three different types of offline reinforcement learning algorithms.
2. The whole work is very well presented, and the structure is very clear, which makes it highly comprehensible.
3. All definitions and assumptions are explicitly and clearly given in Section 2. I personally love this practice, as it helps a lot in figuring out the scope of this work.
4. The results are also well-organised and clearly listed in Section 4.
5. The connection between this work and [TJJ20] is clear, thus it's interesting to see that offline RL can be connected to transfer learning, though there are certain technical differences.
Weaknesses: ### Major
1. **Practical issues of posterior sampling algorithms**:
My biggest concern about this work is the performance of the posterior sampling based algorithms in practice.
I appreciate that the authors remark some feasible solutions to implement an offline RL algorithm based on posterior sampling from line 241 to line 245.
However, given the tractable approximations from line 243 to line 245, I am somewhat worried about the *performance* as well as the *time complexity* of the PS implementations.
I'd be happy to raise my score if the authors can discuss further details about this concern in the updated version.
2. **Discussion about the limitations**:
The authors mainly focus on the missing variance information in their discussion about the limitation of the work.
Given the assumptions listed in the Section 2.3 as well as the practical issues I mentioned above, the limitations can be expanded to cover all of them.
### Minor
1. $H$ in line 107 and line 110:
The authors can move the notation $H$ from line 110 where it's defined to line 107 where it first appears.
2. $t$ in line 129 is not defined:
Though the meaning of $t$ can be inferred given the context, it's better to be defined before its appearance in line 129.
3. Introduction section is too lengthy
I appreciate the gentle and comprehensive introduction to this work in Section 1.
But it might be a bit lengthy, and as a result Section 5 is too short for further discussion of the limitations and future directions of this work.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Can the authors comment on the *performance* as well as the *time complexity* of possible PS implementations?
(See the major issue #1 in the weakness section)
2. What are the slight refinements in line 190?
The authors claimed that they have made slight refinements to the RO-based and VS-based algorithms, but they haven't mentioned these refinements anywhere else.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: See the major issue #2 in the weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your positive feedback. We address all of your concerns and questions below.
---
#### **Question 1**: **Practical issues of posterior sampling**
**Our response**: We emphasize that **the focus of our paper is on the statistical aspects of offline RL** and **not** on the computational aspects (nor on any practical approximation or implementations). Our remarks on lines 243-245 serve merely as remarks regarding implementation/approximation for the interested readers/practitioners, rather than a formal treatment. We do not claim any practical insights into posterior sampling as a contribution.
Regarding the performance and computational considerations regarding posterior sampling in practice, we would like to make the following comments. First, our posterior sampling is oracle-efficient, if we assume access to the expectation-computing oracle and the sampling oracle. As we noted in our paper (Lines 241-245), the expectation-computing oracle can potentially be replaced by another sampling oracle, inspired by the recent idea of [AZ22]. More specifically, $f’$ in the denominator of the likelihood in line 1 of Algorithm 4 can be replaced by a random sample from an inner-loop posterior distribution. In turn, the sampling oracle in practice can be approximated by first-order sampling methods such as Langevin Monte Carlo. A rigorous treatment of this idea in our case is another avenue for future work.
Second, the performance (statistical, computational, and empirical) of such first-order sampling approximations to posterior sampling (even for online RL, and especially for offline RL) is another active research area. As a concrete example of the success of first-order sampling approximations to posterior sampling, the recent work of [2] shows that in (online) high-dimensional linear contextual bandits, one can use Langevin Monte Carlo (LMC) to approximate "feel-good" Thompson sampling [3] (a posterior sampling algorithm that lays the foundation for our pessimistic posterior sampling algorithm), where LMC obtains an optimal sample complexity of $\mathcal{O}(d^2/\epsilon^2)$ with a computational complexity of $\mathcal{O}(d^{9}/\epsilon^8)$ ([2] also showed the superior performance of LMC compared to standard posterior sampling in their experiments).
Third, as a note about using sampling in place of optimization, in certain non-convex settings, sampling even converges provably much faster than optimization (e.g., $\mathcal{O}(d/\epsilon)$ or $\mathcal{O}(d^2 \log(1/\epsilon))$ vs $\Omega((1/\epsilon)^d)$ in the non-convex setting considered in [1]). Thus, it might be possible that the computational complexity of the LMC algorithm (to approximate our pessimistic posterior sampling) is smaller than that of the regularized optimization-based algorithm in certain scenarios. However, this speculation needs a more formal characterization which is also an avenue for future work.
We will add this discussion to our revised version.
**Relevant refs**:
[1] Yi-An Ma, Yuansi Chen, Chi Jin, Nicolas Flammarion, and Michael I. Jordan. “Sampling can be faster than optimization”. PNAS 2019.
[2] Tom Huix, Matthew Zhang, and Alain Durmus. “Tight Regret and Complexity Bounds for Thompson Sampling via Langevin Monte Carlo”. AISTATS 2023.
[3] Tong Zhang. “Feel-Good Thompson Sampling for Contextual Bandits and Reinforcement Learning”. SIAM 2022. (cited as [Zha22] in our submission)
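To make the LMC idea above concrete, here is a minimal, self-contained sketch of the unadjusted Langevin algorithm on a toy Gaussian target (not the paper's bandit/RL setting; the target distribution, step size, and iteration counts are illustrative choices only):

```python
import numpy as np

def langevin_samples(grad_log_post, theta0, step=1e-2, n_steps=20000, burn_in=2000, seed=0):
    """Unadjusted Langevin algorithm:
    theta <- theta + step * grad_log_post(theta) + sqrt(2 * step) * N(0, I).
    Returns the post-burn-in iterates as approximate posterior samples."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    kept = []
    for t in range(n_steps):
        noise = rng.standard_normal(theta.shape)
        theta = theta + step * grad_log_post(theta) + np.sqrt(2.0 * step) * noise
        if t >= burn_in:
            kept.append(theta.copy())
    return np.array(kept)

# Toy target: a Gaussian posterior N(mu, I) in d = 2, so grad log p(theta) = -(theta - mu).
mu = np.array([1.0, -2.0])
samples = langevin_samples(lambda th: -(th - mu), np.zeros(2))
```

With enough iterations, the empirical mean of the kept iterates approaches `mu`; in practice the step size trades off bias against mixing speed.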
---
#### **Question 2**: **Clarification of ”Slight refinements”**
**Our response**: We consider episodic time-inhomogeneous MDP instead of discounted MDP, thus the algorithms are refined accordingly. For the version space algorithm, compared to [XCJ+21], we employed the actor-critic framework instead of directly solving the min-max optimization over the function class and the policy class.
---
Rebuttal Comment 1.1:
Comment: Thanks for the responses from the authors. Since my major concern #1 has been well alleviated, I have decided to raise my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to read our rebuttal and for your feedback. We will integrate your suggestions (including the "minor" ones) in our next revision. | Summary: This paper introduces a novel data coverage measure called "data diversity," which is more stringent than existing data coverage measures like single-policy concentrability. The authors claim that actor-critic algorithms based on VS, RO, and posterior sampling can achieve state-of-the-art sample complexity. The posterior sampling based algorithm is the first of its kind for offline RL.
Strengths: The new data coverage measure is innovative and tighter than existing measures in the literature. The authors' demonstration of good sample complexity under this measure is an important contribution. Additionally, the framework and the posterior sampling approach for offline RL are novel.
Weaknesses: However, I have some doubts about the claim that these algorithms achieve the state-of-the-art convergence rate. The policy $\hat{\pi}$ is obtained by uniformly drawing a $t$ from $[T]$ and letting $\hat{\pi}$ be $\pi^t$. This policy is not a Markov policy. According to the definition of $V^{\pi}$, $V^{\hat{\pi}}=\frac{1}{T}\sum_{i=1}^TV^{{\pi}^i}$. Is this correct? The $1/T$ term comes from the additional expectation over the uniform distribution. For the chosen $t$, $\pi^t$ can have constant variance, and as a result, the high probability bound may not hold for this chosen $\pi^t$.
On the other hand, recent literature such as "Optimal conservative offline RL with general function approximation via augmented Lagrangian", "Offline Primal-Dual Reinforcement Learning for Linear MDP", and "Revisiting the linear-programming framework for offline RL with general function approximation" have policies that are Markov policies and achieve the state-of-the-art sample complexity. Therefore, the authors should compare their results with these papers.
To ensure a fair comparison, the output policy should be a Markov policy. If not, the authors need to clarify this in the introduction and after the theorems. Additionally, the authors should clearly explain the source of randomness for $\hat{\pi}$ in the theorems.
If the authors can address these issues clearly, I would increase the rating.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback. We address all of your concerns below.
---
#### **Concern 1**: **"This policy ($\hat{\pi}$) is not a Markov policy ... "**
**Our response**:
We understand our notation in line 7 of Algorithm 1 might have caused some confusion. To clarify, $\hat{\pi}$ is simply chosen uniformly from the policy set $ \{ \pi^1, …, \pi^T \}$. We emphasize that $\hat{\pi}$ (as well as each $\pi^t$) **is Markovian** (i.e., at each step $h$, each $\pi^t_h$ depends solely on the current state). The uniform mixture of policies (resulting from the multiplicative weights algorithm) and its Markovian property are quite standard in the literature, e.g., see [ZWB21], [XCJ+21], [CXJA22].
---
#### **Concern 2**: **Comparison with recent literature [1][2][3]**
> **Relevant refs**:
> [1] Paria Rashidinejad, Kunhe Yang, Stuart Russell, Jiantao Jiao. "Optimal conservative offline RL with general function approximation via augmented Lagrangian". ArXiv 2022 (cited as [RZY+22] in our submission).
> [2] Germano Gabbianelli, Gergely Neu, Matteo Papini. "Offline Primal-Dual Reinforcement Learning for Linear MDP". ArXiv 2023.
> [3] Asuman Ozdaglar, Jiawei Zhang, Kaiqing Zhang. “Revisiting the linear-programming framework for offline RL with general function approximation". ArXiv 2022.
**Our response**:
We believe that the primal-dual methods, e.g. [1][2][3] and [ZHH+22, CJ22] provide an important alternative to addressing offline RL. However, as we also acknowledged in footnote 5 of our submission, the guarantees of primal-dual methods use **a different set of assumptions** than the value-based methods we considered (the former assumes realizability for the ratio between the state-action occupancy density of the target policy and the state-action occupancy density of the behavior policy, except for [2] where this assumption is implicitly realized under a stronger assumption of linear MDP). This makes the results presented in our paper and the results in [1][3] **not directly comparable**, though our results and [1][3] both achieve the optimal sample complexity of $\mathcal{O}(\epsilon^{-2})$.
Since [2] (which appeared online on May 22, after the main submission deadline) works in linear MDPs, [2] is more comparable to the instantiation of our results to the linear function class. [2] considers primal-dual methods for offline RL in both the infinite-horizon discounted MDP and the average-reward MDP. Our analysis framework for the regularized optimization method in the episodic MDP should translate to the infinite-horizon discounted MDP as well, where the regularized optimization achieves the optimal sample complexity of $\mathcal{O}(\epsilon^{-2})$ while the sample complexity in [2] in this setting is $\mathcal{O}(\epsilon^{-4})$. However, [2] offers a better computational complexity ($\mathcal{O}(K)$ vs $\mathcal{O}(K^{7/5})$, where $K$ is the number of offline episodes) and also works in the average-reward MDP setting, which is beyond the episodic MDP setting considered in our work; on the other hand, our bounds hold for general function approximation, beyond the strong linear MDP assumption of [2]. We will add this discussion and comparison to our revision.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for answering my questions. However, I do not think my concerns have been fully addressed.
The authors give high-probability bounds but have not clearly told the readers where the randomness comes from. Does the randomness contain the randomness of the uniform distribution for selecting a policy from $\{\pi^1,\cdots,\pi^T\}$?
I am also not convinced that the mixed policy is a Markov policy. The index is selected at the beginning by uniform sampling and does not change. At any horizon, the action depends not only on the current state but also on the random index determined at the beginning. So I do not think it is a Markov policy.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for further clarification of the questions when our initial response was not clear enough.
**Randomness in the high-probability bound**. The randomness in the high-probability bounds comes solely from the randomness of the offline dataset, not from the policy mixture. To put it in context, the value sub-optimality (defined in Eq. (1)) of $\hat{\pi}$ in our theorems can be written explicitly as follows:
$SubOpt_{\pi}(\hat{\pi}) = V_1^{\pi}(s_1) - V_1^{\hat{\pi}}(s_1) = V_1^{\pi}(s_1) - \frac{1}{T} \sum_{t=1}^T V_1^{\pi^t}(s_1)$.
We thank the reviewer again for the good question. Even though the mixture policy is standard in the literature [ZWB21,XCJ+21,CXJA22], we will make sure to clarify the source of the randomness of our high-probability bound statements in our revised version.
**Markovian property of the mixture policy**. We remark that even though $\hat{\pi}$ is a mixture of several policies, i.e., $\hat{\pi}\_h(a_h|s_h) = \frac{1}{T} \sum_{t=1}^T \pi^t_h(a_h|s_h)$ for all $h \in [H]$, $\hat{\pi}_h$ is Markovian by definition since each $\pi^t_h$ is Markovian. We explain further in the following.
By definition, a Markovian policy is a mapping solely from the current state to a distribution over the action space, completely independent of any previous states and actions.
First, each $\pi^t_h$ is Markovian because it is fully characterized by another Markovian policy $\pi^{t-1}_h$ (by induction from the uniform policy $\pi^1_h$ which is Markovian), and the state-action value function estimate (that depends solely on the current state and action, not on any previous states and actions). For the full formulae of $\pi^t_h$, please see Line 5 of Algorithm 1 in our submission.
Second, note that
$\hat{\pi}\_h(a_h|s_h) = \frac{1}{T} \sum_{t=1}^T \pi^t_h(a_h|s_h)$ (please notice the same time index $h$ on both sides of the equation), so the action from $\hat{\pi}\_h$ depends solely on the current state $s_h$, not on any previous states and actions $s_{h'}, a_{h'}$ for $h' < h$. Thus, by definition, each $\hat{\pi}_h$ is also Markovian. In other words, a mixture, i.e., a simple weighting across several Markovian policies at the same time index $h$, does not break the Markovian property.
As a concrete example, let's consider the simple case that $T = 2$ and $\mathcal{A} = $ `{`$a_1, a_2$`}`, $\pi^1_h(a_1|s) = \pi^1_h(a_2|s) = 0.5$, and $\pi^2_h(a_1|s) = 0.2$, $\pi^2_h(a_2|s) = 0.8$. Then, by the definition of the mixture policy, we simply have $\hat{\pi}_h(a_1|s) = 0.5(\pi^1_h(a_1|s) + \pi^2_h(a_1|s)) = 0.5 * (0.5 + 0.2) = 0.35$ and $\hat{\pi}_h(a_2|s) = 0.5(\pi^1_h(a_2|s) + \pi^2_h(a_2|s)) = 0.5 * (0.5 + 0.8) = 0.65$. The mixture policy $\hat{\pi}_h$ is obviously Markovian (the action probabilities at each time step $h$ depend solely on the current state at time $h$, not on any previous states and actions).
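For readers who prefer code, the hand calculation above can be reproduced in a few lines (the action names and probabilities are exactly those of the example):

```python
# Mixing two Markovian policies at the same time step h (numbers from the example above).
pi1 = {"a1": 0.5, "a2": 0.5}
pi2 = {"a1": 0.2, "a2": 0.8}

# Equal-weight mixture: pi_hat(a|s) = 0.5 * (pi1(a|s) + pi2(a|s)).
mixture = {a: 0.5 * (pi1[a] + pi2[a]) for a in pi1}
# mixture["a1"] is 0.35 and mixture["a2"] is 0.65 (up to floating-point rounding),
# matching the hand calculation; the result depends only on the current state,
# so the Markovian property is preserved.
```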
We’re happy to answer new questions if our clarification is still not clear enough. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Retrieval-Augmented Multiple Instance Learning | Accept (poster) | Summary: This work proposes RAM-MIL: an approach that uses feature alignment in MIL with an optimal transport based method to retrieve nearest bag representations between the train set and target set. This approach shows consistent improvements in ID and OOD settings. Ablation studies demonstrate the relative effectiveness of different ways to compute the nearest neighbors.
Strengths: - The paper brings forth ideas from Unsupervised Domain Adaptation and feature alignment to improve OOD performance in the MIL setting, which is a pressing concern to address the real world deployment of these models. The assumption of using attention values as probability mass values is a logical choice.
- Extensive experimentation and ablations have been conducted to show the effectiveness of the Optimal Transport approach over simpler options like naively calculating nearest neighbors at a bag representation level. The results show significant improvements in both OOD and ID settings.
Weaknesses: - My main concern is around the computational cost of inference with RAM-MIL being high due to calculating the distance measures between the train and the retrieval set. Also, in cases where the test set is dynamic, such as one more WSI gets added to the test set, all the pairwise distances between the two sets will need to be recalculated
- The section about interpretability with OT for MIL assumes that instance labels for target bag are known, which is a strong assumption. I would suggest removing this section from the paper and focusing on the OOD generalization part.
- Camelyon-17 has data from 5 different medical centers. It might be worth conducting an experiment where data from a single medical center is used in training, while the other centers are used in retrieval set, since these heldout centers clearly correspond to OOD sets. It is not clear if Camelyon-17 is completely OOD wrt Camelyon-16.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - The authors mention using top 20% of instances for Camelyon-16 and 17. How much of this choice is a function of the Camelyon datasets where signal is often present in small foci? How will this change in MIL problems like NSCLC and RCC subtyping where the signal is diffused throughout the WSI? [1] has presented some findings on OOD generalization on these 2 datasets.
- The motivation behind the Dimensionality Reduction experiments is unclear. Is the idea to use the reduced dimensions to perform inference in certain time-sensitive applications?
- Is there any additional computational lift for running this method in a multi-class setting?
[1] - SC-MIL: Supervised Contrastive Multiple Instance Learning for Imbalanced Classification in Pathology
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: - The authors mention the limitation of the complexity of the OT approach scaling with the number of instances in the Conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the comments. Here are the responses to weaknesses (W), Questions (Q) and limitations (L).
**Response to W1:**
We report the computation time of RAM-MIL using the L2, Hausdorff, approximate-OT and full-OT distances for one pair of whole slide images below. The approximate OT provides performance close to full OT while maintaining a low computation cost.
|| L2 | Hausdorff | Approximate-OT | Full OT |
|-------------|---------|---------|---------|---------|
| Time (s) | 0.001 | 1.4 | 0.083| 0.498|
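As a rough illustration of how such bag-to-bag OT distances can be computed (the paper's exact formulation and the timed implementations above may differ), here is a minimal entropic (Sinkhorn) approximation in NumPy, with per-instance masses standing in for attention probabilities:

```python
import numpy as np

def sinkhorn_distance(X, Y, a=None, b=None, eps=0.1, n_iter=200):
    """Entropic (Sinkhorn) approximation of the OT cost between two bags of
    instance features (rows of X and Y). a, b are the per-instance masses
    (e.g. attention probabilities); uniform masses are used if omitted."""
    a = np.full(len(X), 1.0 / len(X)) if a is None else np.asarray(a, float)
    b = np.full(len(Y), 1.0 / len(Y)) if b is None else np.asarray(b, float)
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # squared-Euclidean cost
    K = np.exp(-C / (eps * C.max() + 1e-12))             # cost-normalized Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):                               # Sinkhorn scaling iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]                       # transport plan
    return float((P * C).sum())

rng = np.random.default_rng(0)
bag = rng.random((5, 3))
d_same = sinkhorn_distance(bag, bag)
d_far = sinkhorn_distance(bag, bag + 2.0)  # a clearly shifted bag costs more
```

The full-OT column above would correspond to solving the unregularized transport problem exactly; the Sinkhorn iterations trade a small entropic bias for a much lower computation cost.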
> Also, in cases where the test set is dynamic, such as one more WSI gets added to the test set, all the pairwise distances between the two sets will need to be recalculated
This statement is not true. Note that when a new slide is added to the test set, only the pairwise distances between this slide and the existing retrieval slides (say their number is $N$) need to be calculated, for future use. We will clarify this in an updated version.
Furthermore, considering the more practical case of streaming data, more efficient approximations can be used to deploy streaming inference efficiently. For example, a naïve method is to cluster the existing retrieval slides and compute distances only with the cluster centers (say their number is $C$). This reduces the number of OT computations from $O(N)$ to $O(C)$. In this case, the OT computation time would be trivial.
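The naïve clustering idea above can be sketched as follows; the choice of k-means with farthest-point initialization and the toy bag-level representations are our own illustrative assumptions, not part of RAM-MIL:

```python
import numpy as np

def kmeans_centers(reps, k=2, iters=25):
    """Naive Lloyd's k-means with greedy farthest-point initialization.
    A new test slide is then matched against the k centers instead of all
    N retrieval slides, reducing the number of distance computations."""
    centers = [reps[0]]
    for _ in range(k - 1):  # farthest-point initialization
        d = np.min([((reps - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(reps[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):  # Lloyd updates
        d = ((reps[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = reps[labels == j].mean(0)
    return centers

rng = np.random.default_rng(0)
# Two hypothetical, well-separated groups of bag-level representations
reps = np.vstack([rng.normal(0, 0.3, (20, 4)), rng.normal(5, 0.3, (20, 4))])
centers = kmeans_centers(reps, k=2)
```

At inference, a new slide's representation would be compared only against `centers` (or against the slides inside the best-matching cluster, for a finer search).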
We agree that the dynamic test set is an interesting research topic, but it is out of the scope of this submission. We will leave it as future work.
**Response to W2:**
Thanks for pointing out this typo in Section 5.3. We meant to say "Suppose that we are interested in the instance labels of the target bag, and suppose that the instance labels of the source bag are known." Swapping all occurrences of "target" and "source" disambiguates Section 5.3. We will make this revision in the manuscript to make it clear.
**Response to W3:**
We add an experiment that use 2 hospitals of Camelyon17 as the source dataset and 3 other hospitals as the target dataset. Here are the results:
||AUC| ACC|
|-----------|--------|--------|
|CLAM|0.8155|0.7222|
|RAM-MIL|**0.8209**|**0.7667**|
**Response to Q1:**
We list the average accumulated attention probability values for the top 10% and 20% of patches from the Camelyon16, Camelyon17 and TCGA-NSCLC datasets. We observe that NSCLC presents a signal distribution similar to Camelyon16 and Camelyon17. We agree with [1] that the imbalanced distributions of positive patches across different WSIs are a crucial issue and will add a discussion of [1] to the related work section. Extending RAM-MIL to imbalanced positive patch distributions will be interesting future work.
|| C16 | C17 | TCGA-NSCLC |
|-------------|---------|---------|---------|
| 10% of patches | 0.9661 | 0.9667 | 0.9969|
| 20% of patches | 0.9882 | 0.9805 | 0.9999|
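For illustration, the accumulated attention probability for the top fraction of patches can be computed as below; the attention values here are made up to mimic small tumor foci, not taken from the trained models:

```python
import numpy as np

def accumulated_attention(attn, frac):
    """Total attention mass captured by the top `frac` fraction of patches."""
    p = np.asarray(attn, dtype=float)
    p = p / p.sum()  # normalize to a probability distribution
    k = max(1, int(np.ceil(frac * len(p))))
    return float(np.sort(p)[::-1][:k].sum())

# Made-up skewed attention over 100 patches: 10 "positive-looking" patches
# carry 90% of the mass, mimicking small tumor foci.
attn = np.r_[np.full(10, 0.09), np.full(90, 0.1 / 90)]
top10 = accumulated_attention(attn, 0.10)  # 0.9 (up to rounding)
```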
**Response to Q2:**
Note that the intrinsic dimension is a tool to analyze and validate our methodology. Unlike the ambient (normal) dimension, the intrinsic dimension measures how many dimensions are needed to represent the data without much loss of information. A lower intrinsic dimension makes learning easier, as the data lie on a less complex or less noisy manifold.
The point of Section 3.3 (Reduction of Intrinsic Dimension) is to show that our proposed retrieval method can reduce the intrinsic dimension of the features, thus making MIL learning easier. This explains why RAM-MIL outperforms the other methods.
The point of Section 5.2 (Dimensionality Reduction) is to show that simply reducing the ambient dimension with SVD performs poorly. Retrieval helps organize the latent space and reduce the intrinsic dimension; thus, SVD after retrieval outperforms the SVD-only approach.
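As a concrete (hypothetical) sketch of the SVD-only baseline contrasted here: reducing feature vectors to a small number of ambient dimensions with a truncated SVD could look like the following; this is our own NumPy illustration, not the authors' code, and the data and dimensions are made up.

```python
import numpy as np

def reduce_to_k_dims(X, k):
    """Project feature vectors X (n, d) onto their top-k principal
    directions via a truncated SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T  # (n, k) reduced features

rng = np.random.default_rng(0)
base = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 64))  # ~rank-3 data in 64-D
X = base + 0.01 * rng.normal(size=base.shape)                # small ambient noise
Z = reduce_to_k_dims(X, 5)                                   # keep 5 dimensions
```

Since the toy data lie near a 3-dimensional subspace, the last two kept components carry almost no variance, which mirrors the intuition that the intrinsic dimension, not the ambient one, is what matters.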
**Response to L1:**
Thanks for the reviewer’s suggestion. During the rebuttal period, we found that using attention probabilities to select a small subset of patches is technically sound: with 10% of the patches the performance already saturates and does not increase further, so this limitation is less severe than it appeared. We will test more datasets to check whether this conclusion holds and will post the results as soon as they are available.
---
Rebuttal Comment 1.1:
Comment: I have gone through the authors' rebuttal response. The authors have provided clarifications around the computational complexity of RAM-MIL in dynamic test set, intrinsic dimensions and the typo around having access to instance labels of target vs source set. It is interesting to see the gap between accumulated probabilities for 10% vs 20% of patches for Camelyon datasets (where signal is often in small foci) vs NSCLC subtyping (where signal is more diffused). I would have expected lower difference between 10% and 20% in Camelyon compared to NSCLC, however the results here show the opposite (2-1.5% in Camelyon vs 0.003% in NSCLC). Do the authors have a reason behind why this could be happening?
---
Reply to Comment 1.1.1:
Comment: Thanks for the responses!
We agree that the positive WSIs in NSCLC have a larger proportion of positive patches, as indicated by *Wang et al*. As for the attention mismatch problem, there are two major reasons:
1. In a trained attention-based multiple instance learning (ABMIL) model, a high attention probability does not necessarily indicate a positive patch. This problem was discussed theoretically in Bayes-MIL (*Cui et al*): high attention probabilities converge to positive patches only under strict assumptions or with strong model regularization.
2. The logic of ABMIL is that a slide is regarded as positive if even a single patch is positive. The generated attention values can therefore concentrate heavily on a few patches when the model is not well regularized.
This explains why the accumulated attention values do not correlate with the size of the positive area; they are influenced more by the model’s optimization process. However, this does not invalidate using the attention probabilities to compute the OT, as negative patches also carry information and contribute to the similarity between slides.
*Wang et al*. Targeting tumor heterogeneity: multiplex-detection-based multiple instance learning for whole slide image classification. Bioinformatics.
*Cui et al*. Bayes-MIL: A New Probabilistic Perspective on Attention-based Multiple Instance Learning for Whole Slide Images. ICLR 2023. | Summary: The paper proposes a new Multiple instance learning method called RAM-MIL. The proposed method is a two-stage approach which mainly involves the following steps: 1. Train an existing MIL model on the D0 to extract feature representations of each instance 2. Retrieve nearest neighbor from retrieval set to form merged bag representation 3. Train another classifier on the merged bag representation. The key component in the proposed method is the use of Optimal-Transport to measure bag-to-bag distance, retrieve nearest neighbor bag and form merged bag representation. The proposed approach improves out-of-domain performance while also maintaining good in-domain performance.
Strengths: 1. writing is mostly clear and easy to follow
2. related work section is comprehensive
3. the idea of using OT for measuring bag-to-bag distance and forming merged representations seems effective.
Weaknesses: while the proposed method improves performance over the compared baselines (Table 1), it also introduces additional steps on top of these baselines (finding nearest neighbors, forming a merged bag representation, and then training another bag classifier); in this sense, the additional performance gain is not that surprising.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Did you describe how Figure 3 is obtained?
2. Can you clarify your experiment setting for the baseline methods (DSMIL, CLAM, TransMIL, Bayes-MIL) in Table1, did they touch the retrieval set at all?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: the efficiency of the algorithm due to solving optimal transport problem is acknowledged.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the comments. Here are the responses to weaknesses (W), Questions (Q) and limitations (L).
**Response to W1:**
Thanks for the comments. As retrieval (also known as external memory) has achieved great success in applications such as large language models [1][2], we believe it is worth exploring its application in other important domains, such as the challenging problem of histopathological diagnosis. Our paper is the first to provide the theoretical understanding, methodology, algorithms, and analysis for retrieval with multiple instance learning. On the algorithmic side, it is necessary to establish how retrieval can be done and how the retrieved knowledge can be merged into the original model.
Note that we provide multiple choices for executing our retrieval algorithm, including L2, approximate-OT, Hausdorff distance, and full OT. Running the approximate versions (L2 or approximate-OT) of the “additional steps” introduces negligible overhead in finding nearest neighbors (see the timings below). Merging bag representations and training a linear bag classifier incur trivial computational overhead on modern consumer-grade GPUs. **Therefore, the additional gain does not come with much overhead. Note that TransMIL and Bayes-MIL are heavier models with larger sizes and longer running times.** We will update the parameter and running-time comparisons later to show our advantage.
|| L2 | Hausdorff | Approximate-OT | Full OT |
|-------------|---------|---------|---------|---------|
| Time (s) | 0.001 | 1.4 | 0.083| 0.498|
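For readers unfamiliar with what the OT columns above measure, a minimal entropic-regularized OT (Sinkhorn) computation between two bags of instance features can be sketched as follows. This is a generic NumPy illustration with made-up defaults, not the paper's implementation; using attention probabilities as the marginals follows our reading of the method.

```python
import numpy as np

def sinkhorn_bag_distance(Xa, Xb, a, b, reg=0.05, n_iter=300):
    """Entropic-regularized OT cost between two bags of instance features.

    Xa: (n, d), Xb: (m, d) instance features; a: (n,), b: (m,) marginal
    weights (e.g. attention probabilities), each summing to one.
    """
    C = ((Xa[:, None, :] - Xb[None, :, :]) ** 2).sum(-1)  # squared-L2 cost
    eps = reg * C.mean()              # scale the regularizer to the cost range
    K = np.exp(-C / eps)
    u = np.ones(len(a))
    for _ in range(n_iter):          # alternating marginal scaling
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]  # transport plan (rows sum to a)
    return (P * C).sum(), P

rng = np.random.default_rng(0)
bag1 = rng.uniform(size=(40, 5))
bag2 = bag1.copy()                   # identical bag -> near-zero cost
bag3 = bag1 + 1.0                    # shifted bag  -> larger cost
w = np.full(40, 1 / 40)
cost_same, P_same = sinkhorn_bag_distance(bag1, bag2, w, w)
cost_shift, _ = sinkhorn_bag_distance(bag1, bag3, w, w)
```

The approximate-OT variant in the table presumably truncates or relaxes this iteration; the L2 variant replaces the whole computation with a single distance between pooled bag features, which explains the large timing gap.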
Inspired by your comments, we think it is good future work to study how to merge the retrieved data into MIL in an online fashion (which takes even less time). We performed a preliminary test with retrieval integrated into the training pipeline of RAM-MIL. The preliminary results below show a promising online approximation of the offline RAM-MIL:
|| C16-AUC | C16-ACC | C17-AUC | C17-ACC |
|-------------|---------|---------|---------|---------|
| RAM-MIL-Online | 0.9263 | 0.875 | 0.7686 | 0.7453 |
| RAM-MIL | 0.9451|0.9200|0.7974|0.7795|
[1] Borgeaud et al. Improving language models by retrieving from trillions of tokens. ICML 2022.
[2] Guu et al. REALM: Retrieval-Augmented Language Model Pre-Training. ICML 2020.
**Response to Q1:**
To compute the intrinsic dimension, we obtain the slide features of each method and then feed them to the estimator provided by [Facco].
Here are the procedures for obtaining slide features for different methods:
CLAM: we take the attention-weighted patch features as the slide feature.
Mixup: we randomly pick a slide feature of the same class as a neighbor, then merge the two features by convex combination.
The rest: these variants merge the slide features of the source and the retrieved slides, with retrieval implemented by the different methods.
Facco, et al. Estimating the intrinsic dimension of datasets by a minimal neighborhood information. NeurIPS 2017.
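As an illustration of the estimator cited above, the TwoNN method of Facco et al. can be sketched in a few lines; this is our own NumPy sketch, not the authors' code. For each point, the ratio mu = r2/r1 of the distances to its second and first nearest neighbors follows a Pareto law with exponent equal to the intrinsic dimension d, giving the maximum-likelihood estimate d = N / sum(log mu).

```python
import numpy as np

def two_nn_id(X):
    """TwoNN intrinsic-dimension estimator (Facco et al.)."""
    sq = (X ** 2).sum(1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T  # squared pairwise distances
    d2 = np.maximum(d2, 0.0)                      # guard against round-off
    np.fill_diagonal(d2, np.inf)                  # ignore self-distance
    d2.sort(axis=1)
    mu = np.sqrt(d2[:, 1] / d2[:, 0])             # r2 / r1 for each point
    return len(X) / np.log(mu).sum()              # Pareto MLE of d

# toy check: a 2-D manifold linearly embedded in 50-D still has ID ~ 2
rng = np.random.default_rng(0)
latent = rng.uniform(size=(2000, 2))
X = latent @ rng.normal(size=(2, 50))
id_est = two_nn_id(X)
```

The point of the estimator is exactly the distinction drawn in this rebuttal: the ambient dimension of `X` is 50, but the estimate recovers the dimensionality of the underlying manifold.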
**Response to Q2:**
There is no “retrieval set” in the baseline methods, as this paper is the first to consider retrieval in multiple instance learning. However, **the baseline methods and our method access the same data under the in-domain setting during training.** The retrieval set is taken only from the in-domain dataset (e.g., the training and validation sets of Camelyon16).
For the out-of-domain setting, a retrieval dataset provides unlabeled out-of-domain data for retrieval. For a fair comparison, we added the method of [Yang et al], which also utilizes the retrieval dataset together with adversarial training, to the experiments on Camelyon16 and Camelyon17, as suggested by reviewer 5E1m. Here are the results:
|| AUC | ACC |
|-------------|---------|---------|
| Yang et al | 0.6786 | 0.5867 |
| RAM-MIL | **0.7974** | **0.7795** |
Yang et al. Double adversarial domain adaptation for whole-slide-image classification. MIDL 2021.
**Response to L1:**
We added the efficiency results; please refer to the response to W1.
---
Rebuttal Comment 1.1:
Comment: I have read the authors' response. The response addresses my concerns regarding the additional overhead. I also appreciate the elaboration on the motivation of introducing the retrieval procedures.
Q: How did you get the RAM-MIL-Online results? Will that be described in the paper?
I will consider raising my score to 7. | Summary: The authors propose a novel MIL framework which integrates Wasserstein Distance and Optimal Transport to match instance representations across bags. This method can be further leveraged to match representations across bags from different domains, thereby enabling Cross Domain Adaption.
Strengths: The authors' RAM-MIL method achieved state-of-the-art performance in terms of AUC and accuracy on CAMELYON16 and CAMELYON17, as well as MUSK1, MUSK2, FOX, and ELEPHANT. The authors provide interesting insights on the dimension-reduction properties of their method. Finally, the authors demonstrate ways to interpret and visualize their method.
Weaknesses: 1. The authors might have unmasked themselves in the double-blind review process by using the same pathology image in Figure 1 as Bayes-MIL
Minor Problems:
1. The authors state Theorem 2 without clearly indicating that the proof is in the supplement. The authors seem to state that the proof of a similar result for the feature space is in the supplement.
2. Minor typos such as "which which" (line 308)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The authors state that "the calculation of OT quite time-consuming", which is why they pre-selected the top 20% of instances. Can the authors give training-time comparisons between OT, L2, Hausdorff, and Approx-OT? This will help readers understand the practical trade-offs between L2 and OT.
2. In the supplement, the authors provided a table of results for OT with the top 10% and top 20% of instances. The performance differences seem minor. What would happen with the top 30% of instances, or is this a problem-specific choice depending on the type of disease and the relative coverage of disease features in the WSIs?
3. I am not sure if I follow the authors' reasoning in section 5.2 and the corresponding Table 3. The authors state that the intrinsic dimension of CLAM is 5-7 on CAM16 and CAM17, then why would they cap merged bag representation to 5d vector? Would this not impact CLAM more than RAM-MIL and affect its performance? What if it was capped to 7d?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors could discuss the general applicability of RAM-MIL. Is it only to close problems such as CAM16 and CAM17? Can it help with WSI and bag labels that are different diseases?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the comments. Here are the responses to weaknesses (W) and Questions (Q).
**Response to W1:**
Thanks for raising this potential issue. Please note that we follow the double-blind reviewing rules and the submission does not contain any identifying information. For Figure 1, some slide visualizations are borrowed from the Bayes-MIL library, which is based on OpenSlide and the CLAM library. As Bayes-MIL is published work and its code is accessible online (https://github.com/ralphc1212/bayes-mil), we believe using the same visualization tool in our submission does not violate anonymity. We will cite Bayes-MIL for Figure 1 in the manuscript.
**Response to W-Minor 1:**
As indicated in Lines 480-481 on Page 13 of the supplement, to obtain the same result for features, we only need to assume a probability measure over the instance representations h and replace all X in the proof with H. We will provide Theorem 2 and a formal proof in the supplement.
**Response to Q1:**
Thanks for the advice. Here are the times for comparing one pair of slides under the different metrics:
|| L2 | Hausdorff | Approximate-OT | Full OT |
|-------------|---------|---------|---------|---------|
| Time (s) | 0.001 | 1.4 | 0.083| 0.498|
Note that Approximate-OT provides a decent trade-off between computation time and performance. Below we show that with a small fraction of patches (e.g., 10%), the performance saturates because most of the important information is concentrated in a few patches. Therefore, using a small number of patches for the OT computation is a reasonable choice.
**Response to Q2:**
Thanks for the advice. We calculated the cumulative attention probability of the top 10% and 20% of patches:
|| C16 | C17 | TCGA-NSCLC |
|-------------|---------|---------|---------|
| 10% of patches | 0.9661 | 0.9667 | 0.9969|
| 20% of patches | 0.9882 | 0.9805 | 0.9999|
This indicates that most of the useful information is already contained in the top 10% of patches, which explains why the performance changes little as more patches are included.
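The patch pre-selection discussed above (ranking patches by attention probability, keeping a fixed fraction, and renormalizing the kept weights so they can serve as OT marginals) might look like the following sketch; the function and variable names are our own illustration, not the paper's code.

```python
import numpy as np

def select_top_patches(features, attention, frac=0.10):
    """Keep the top `frac` of patches by attention probability.

    Returns the kept features, the renormalized weights (usable as OT
    marginals), and the accumulated attention mass of the kept patches.
    """
    k = max(1, int(round(len(attention) * frac)))
    idx = np.argsort(attention)[::-1][:k]  # indices of highest-attention patches
    mass = attention[idx].sum()
    return features[idx], attention[idx] / mass, mass

# toy bag: 100 patches, attention concentrated on 5 of them
att = np.ones(100)
att[:5] = 200.0
att /= att.sum()
feats = np.arange(100, dtype=float).reshape(100, 1)
sub_feats, sub_w, mass = select_top_patches(feats, att, frac=0.10)
```

With attention concentrated on a few patches, the top 10% already carries most of the mass, mirroring the ~0.97 accumulated values reported in the table above.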
**Response to Q3:**
Section 5.2 shows that directly applying a simple dimensionality-reduction method such as SVD does not improve performance, whereas applying retrieval followed by SVD performs significantly better. We added an experiment with the dimensionality capped at 7:
|| C16-AUC | C16-ACC | C17-AUC | C17-ACC |
|-------------|---------|---------|---------|---------|
| CLAM | 0.8821 | 0.6688 | 0.7568 | 0.5474 |
| RAM-MIL | **0.9312** | **0.905** | **0.7641** | **0.7434** |
**Response to limitations:**
We also added experiments on other datasets: in-domain classification on TCGA-NSCLC, in-domain tumor-stage classification on Camelyon17, in-domain classification on CPTAC-LSCC, and domain adaptation from CPTAC-LSCC to CPTAC-UCEC.
Here are the results:
TCGA-NSCLC (in-domain):
|| AUC |ACC |
|-------------|---------|---------|
| CLAM | 0.9420 | 0.8640 |
| Scaling ViT | 0.9516 | 0.8821 |
| TransMIL | **0.9603** | 0.8835 |
| Bayes-MIL | 0.9451 | 0.8965 |
| RAM-MIL | 0.9497 | **0.8988** |
Camelyon17 tumor stage classification (in-domain):
|| AUC |ACC |
|-------------|---------|---------|
| CLAM | 0.7803 | 0.60 |
| Bayes-MIL | 0.8070 | 0.64 |
| RAM-MIL | **0.8132** | **0.65** |
LSCC as the in-domain dataset and UCEC as the out-of-domain dataset:
|| LSCC-AUC | LSCC-ACC | UCEC-AUC | UCEC-ACC |
|-------------|---------|---------|---------|---------|
| CLAM | 0.9500 | 0.9285 | 0.4986 | 0.6520 |
| RAM-MIL | **0.9667** | **0.9382** | **0.6056** | **0.6527** |
---
Rebuttal Comment 1.1:
Comment: I thank the authors for answering my questions and conducting experiments on more datasets and ablation studies. These results confirm my previous opinion and my rating is unchanged. | Summary: This work presents a method for multiple instance learning-based method for slide retrieval based on optimal transport, called RAM-MIL. The idea is as follows: 1) first train a standard ABMIL model for supervised classification, 2) pre-extract the features from ABMIL to get slide-level features, 3) compute the OT map via SInkhorn's between bag of instance representations (using the attention weights to calculate the OT) within the retrieval set, 4) merge representations followed by training a logistic classifier. RAM-MIL was trained and evaluated in CAMELYON16 and CAMELYON17, with ablation experiments conducted with other OT-based retrieval methods.
Strengths: - This work represents a new method for augmenting MIL with optimal transport, particularly for handling out-of-distribution data. The idea of computing optimal transport maps between instances and bags in histopathology has been successfully explored in prior literature such as HHOT (Yeaton et al. 2022 MIDL), as well as in tackling domain adaptation problems, e.g., Falahkheirkhah et al. 2023 MIDL, Domain adaptation using optimal transport for invariant learning using histopathology datasets. Distinct from prior works is the further application of OT with feature merging at the bag level for evaluating out-of-domain performance in slide-level classification. At the same time, this work still brings several interesting ideas on how to leverage the attention weights as the probability measures for computing OT maps.
- Beyond the main paper, the supplement is also nicely organized and includes additional ablation experiments regarding % of attention weights used and different merge functions. Further evaluation of patch-level localization performance was also performed, which is appreciated in baselines such as CAMELYON16 (commonly used for slide-level performance only).
- Figures are illustrative and well-designed.
Weaknesses: 1. Regarding novel contribution #1: On line 67, one of the contributions state that "this work is among the first to investigate the out-of-domain performance for MIL, which is vital issue for the application of MIL in automated medical diagnosis with WSI. Our empirical study exposes the risk of recent MIL algorithms under distribution shifts." This claim is not substantiated and untrue, as most clinical studies in computational pathology do consider the problem of out-of-domain performance of MIL models (for instance, Campanella et al. 2018 Nature Med, Lu et al. 2018 Nature BME, Courtiol et al. 2019 Nat Med., etc). Regarding out-of-domain performance for image retrieval, works such as Yottixel (Kalra et al. 2020, Medical Image Analysis) and SISH (Chen et al. 2022, Nature BME) have also performed quantitative evaluation on out-of-domain external cohorts.
2. Baselines: Though methods such as CLAM and Bayes-MIL were used as comparisons, this work lacks comparisons with other important baselines for slide retrieval in histopathology such as SISH and Yottixel, and OT-based pathology works such as HHOT (most similar to this work conceptually). Other missing baselines include performance of conventional ABMIL (that was used to train RAM-MIL), and also linear probe analysis on top of the already pre-extracted ABMIL features. The reviewer would also appreciate hyper-parameters used in linear probe analysis of this work, which may also importantly influence results.
3. Retrieval: Though the contributions of this work are posed as solving a retrieval problem, the evaluation (task, metrics) is focused on solving a popular but narrow task in histopathology (CAMELYON16 - a needle-in-a-haystack problems). Only one dataset is used for evaluation of this method, which may not be representative of broader slide-level retrieval tasks in histopathology. Lastly, one potential practical limitation is the computational complexity of using OT for retrieval, which should be commented on in the rebuttal.
I am overall positive about this method, as this work brings an interesting idea regarding augmenting OT with attention weights from ABMIL in histopathology, my main concern of this work is the lack of evaluation on other datasets such as TCGA, CPTAC, NLST, GTEX, etc. - which can be readily used for evaluating out-of-domain performance and slide-retrieval in a pan-cancer setting, as well as missing baselines.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. How well does this method perform against other baselines such as SISH / Yottixel for slide retrieval? ABMIL with and without linear probe?
2. There exist many histopathology datasets that can be combined to form external cohorts. The findings of this work would be strengthened if more datasets and cancer types were evaluated.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations of this work were addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the comments. Here are the responses to weaknesses (W) and Questions (Q).
**Response to W1:**
Thanks for mentioning the related works. We will add them to the related work section.
Regarding the claimed contribution, we meant “the out-of-domain performance of attention-based multiple instance learning (ABMIL)” rather than claiming to be the first work considering the out-of-domain problem for whole slide images. As the current wording is ambiguous, we will revise the contribution claim to be more precise.
**Response to W2 and Q1:**
For the information-retrieval works on whole slide images (Yottixel and SISH), we will add them to the related work section. However, these two methods are not directly comparable, as they focus on the retrieval problem itself, while our goal is to use retrieval to enhance weakly supervised classification (MIL). A practical comparison is to replace the OT-retrieval module in RAM-MIL with these retrieval methods. As no implementation of Yottixel is available and SISH is the more advanced retrieval method, we integrated SISH retrieval into the RAM-MIL pipeline. The experiment runs on Camelyon17 (covering 5 hospitals), with 2 hospitals as the in-domain dataset and the rest as the out-of-domain dataset. Here are the experimental results:
||AUC| ACC|
|-----------|--------|--------|
|CLAM|0.8155|0.7222|
|CLAM-SISH|0.7647|0.7444|
|RAM-MIL|**0.8209**|**0.7667**|
For the OT-based pathology work, we added HHOT to the related work section. HHOT only compares whole slide images with uniform-weight OT. As they mention kNN as a discriminative method, we compared HHOT-kNN with RAM-MIL (C16 as in-domain, C17 as out-of-domain):
||C16-AUC|C16-ACC|OOD-C17-AUC|OOD-C17-ACC|
|----------------|---------|---------|---------|---------|
|HHOT-kNN (k=1)|0.7007|0.7318|0.6939|0.7173|
|HHOT-kNN (k=3)|0.7618|0.7544|0.7263|0.7523|
|ABMIL|0.9010|0.875|0.7287|0.7189|
|ABMIL w/ linear probe|0.9107|0.8725|0.7312|0.7077|
|RAM-MIL|**0.9451**|**0.9200**|**0.7974**|**0.7795**|
We also include ABMIL with and without a linear probe in the table above.
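A minimal version of the "ABMIL w/ linear probe" baseline listed above — a logistic regression fitted on frozen bag-level features — can be sketched in plain NumPy as follows; the hyperparameters and toy data are placeholders, not the values used in the paper.

```python
import numpy as np

def train_linear_probe(H, y, lr=0.5, epochs=300):
    """Fit a logistic-regression probe on frozen bag features H (n, d)
    with binary labels y, by plain gradient descent on the log-loss."""
    w, b = np.zeros(H.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(H @ w + b)))  # predicted probabilities
        g = p - y                               # log-loss gradient signal
        w -= lr * (H.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

# toy check on linearly separable frozen features
rng = np.random.default_rng(0)
H = np.vstack([rng.normal(-1, 1, (50, 8)), rng.normal(1, 1, (50, 8))])
y = np.repeat([0.0, 1.0], 50)
w, b = train_linear_probe(H, y)
acc = (((H @ w + b) > 0) == (y == 1)).mean()
```

The probe leaves the feature extractor untouched, so any gain over plain ABMIL isolates how much information the frozen bag features already carry.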
**Response to W3 and Q2:**
We added experiments with in-domain classification on TCGA-NSCLC, in-domain tumor-stage classification on Camelyon17, in-domain classification on CPTAC-LSCC, and domain adaptation from CPTAC-LSCC to CPTAC-UCEC.
Here are the results:
TCGA-NSCLC (in-domain):
|| AUC |ACC |
|-------------|---------|---------|
| CLAM | 0.9420 | 0.8640 |
| Scaling ViT | 0.9516 | 0.8821 |
| TransMIL | **0.9603** | 0.8835 |
| Bayes-MIL | 0.9451 | 0.8965 |
| RAM-MIL | 0.9497 | **0.8988** |
Camelyon17 tumor stage classification (in-domain):
|| AUC |ACC |
|-------------|---------|---------|
| CLAM | 0.7803 | 0.60 |
| Bayes-MIL | 0.8070 | 0.64 |
| RAM-MIL | **0.8132** | **0.65** |
LSCC as the in-domain dataset and UCEC as the out-of-domain dataset:
|| LSCC-AUC | LSCC-ACC | UCEC-AUC | UCEC-ACC |
|-------------|---------|---------|---------|---------|
| CLAM | 0.9500 | 0.9285 | 0.4986 | 0.6520 |
| RAM-MIL | **0.9667** | **0.9382** | **0.6056** | **0.6527** |
---
Rebuttal Comment 1.1:
Comment: We also updated the related work to include the methods mentioned by the reviewer, drafted as “Update of related works” in the responses to reviewer 5E1m. Please check.
-Authors | Rebuttal 1:
Rebuttal: We thank all reviewers for the valuable feedback and constructive comments. We have prepared the responses and revised the manuscript accordingly to address your concerns. The major concerns and the corresponding answers/explanations/clarifications are as follows:
1. Related works (5E1m, WXnS, 7vyh) and datasets (WXnS, 7vyh): we added experiments comparing our method with Yang et al (domain adaptation for MIL, mentioned by 5E1m), HHOT (an OT method for WSIs, mentioned by WXnS), and SISH (a nearest-neighbor search method, mentioned by WXnS), and will include the missing citations in the related work section. RAM-MIL outperforms these related methods in the experiments.
We also add experimental results on new datasets including TCGA-NSCLC, CPTAC-LSCC and CPTAC-UCEC (WXnS). New experiments with Camelyon17 multi-class tumor stage classification and domain adaptation between hospitals from Camelyon17 (7vyh) are included. Please check the one-page pdf for details.
2. Intrinsic dimension and dimensionality reduction (5E1m, yGH7, Q4PZ and 7vyh): we provide additional explanation of the intrinsic dimension, which was misunderstood as the ambient (normal) dimension. The data are assumed to lie on a low-dimensional manifold whose intrinsic dimensionality is lower than the ambient dimensionality.
3. Efficiency of retrieval (yGH7, Q4PZ, 7vyh) and percentage of patches used to calculate OT (yGH7, 7vyh): we tested the retrieval efficiency of different variants of RAM-MIL (L2, Hausdorff, approximate-OT, full OT), which trade off performance against efficiency. Approximate-OT provides decent performance with high retrieval efficiency; users can select the distance metric based on their needs.
During the rebuttal period, we found that using attention probabilities to select a small subset of patches is technically sound: with 10% of the patches the performance already saturates and does not increase further.
4. Others: dimension manipulation with a pre-trained model (5E1m), ABMIL with or without a linear probe (WXnS), online retrieval (Q4PZ), and dynamic evaluation (7vyh) are answered with new experimental results accordingly. Minor problems such as typos will be fixed in the manuscript.
We hope that our responses address the mentioned weaknesses and concerns, and that the reviewers will raise their scores accordingly.
Pdf: /pdf/761084d2c892328198d69d3e37f9621e2b3883be.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper introduces the Retrieval-AugMented MIL (RAM-MIL) framework, which addresses the challenge of performance deterioration in MIL models when tested on an out-of-domain test set. The proposed framework achieves state-of-the-art performance in both in-domain and out-of-domain scenarios. The RAM-MIL framework integrates Optimal Transport as the distance metric for nearest neighbor retrieval and uses the transportation matrix derived from OT to render the retrieval results interpretable at the instance level. The paper also proves a theorem that demonstrates the reduced input dimensions lead to improved MIL performance and proposes the RAM-MIL algorithm to learn a low intrinsic dimension feature space and enhance the model generalization, especially for out-of-domain data. The proposed methodology is evaluated on whole slide image (WSI) datasets and general MIL datasets.
Strengths: This article relates the input data dimension to the performance of MIL and provides a novel theoretical explanation. This perspective is very innovative.
The article addresses the unsupervised domain adaptation problem by retrieving instances with similarities from the unlabeled domain and merging them with instances from the labeled domain. This approach impresses me deeply.
Weaknesses: 1.The author mentioned that they are the first to explore the problem of domain adaptation in pathological image domains. However, there have been some existing works in this area, including domain adaptation for whole slide images. For example:
[1] Collaborative Unsupervised Domain Adaptation for Medical Image Diagnosis
[2] Double adversarial domain adaptation for whole-slide-image classification
[3] Adversarial Domain Adaptation for Classification of Prostate Histopathology Whole-Slide Images.
[4] Unsupervised Domain Adaptation for Classification of Histopathology Whole-Slide Images
2.(1) As mentioned in Comment 1, there have been numerous studies in this field. However, this article lacks a detailed review of domain adaptation issues in pathological images, including whole slide images. It also fails to provide a comprehensive explanation of the differences between the proposed method in this article and those previous studies.
(2) Additionally, the experiments in this article only compare against baseline and self-variants, without comparing with the currently most advanced methods available.
3.The author mentioned, "Our theoretical result based on Wasserstein distance in the input space shows a negative correlation between **MIL performance** and the **input data dimension**, which motivates us to propose the novel RAM-MIL framework based on the OT."
(1) What does the term "input data dimension" refer to? Is it the feature dimension of the input instances? Does "MIL performance" refer to the performance of unsupervised domain adaptation or the performance of MIL itself?
(2) As far as I know, most current MIL methods first use a pretrained self-supervised network to map instances from the image domain to the feature domain, and the dimension of these features can be defined by the pretrained self-supervised network. Therefore, can't the input data dimension issue raised in this article be simply solved by modifying and defining it through a pretrained network?
(3) In conclusion, I haven't understood the relationship between the "input data dimension" and "MIL performance" in this article. Why does reducing the input data dimension improve performance? To what extent should it be reduced at least? How does this article's method determine the optimal input data dimension?
4.Is the Merge operation performed by weighted averaging? How is the Merge operation specifically completed?
I'm looking forward to the author's reply, and after my questions are answered, I will reconsider my rating.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to Weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Please refer to Weaknesses. I'm looking forward to the author's reply, and after my questions are answered, I will reconsider my rating.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the comments. Here are the responses to weaknesses (W).
**Response to W1 and W2.1:**
Thanks for mentioning the related works. We will add discussions of [1, 2, 3, 4] to the related work section, and [2] will be added to the comparisons in the evaluation.
Note that [1, 2, 3, 4] are **NOT** attention-based multiple instance learning models for whole-slide-image classification. [1, 3, 4] are domain adaptation methods for instance (patch)-level classifiers rather than multiple instance learning models; instance-level classifiers neglect the global correlation between patches and require instance-level labels for training. The most related work is [2], where the patch features of a whole slide image are aggregated using Fisher vector encoding and global pooling. The fundamental difference between [2] (also denoted Yang et al in other responses) and ours is that our method is based on attention-based multiple instance learning, where the patch-level features are aggregated by learnable attention.
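The learnable attention aggregation mentioned above is the standard ABMIL pooling of Ilse et al. (2018); a minimal NumPy sketch of that operator (our illustration, with made-up parameter shapes) is:

```python
import numpy as np

def abmil_pool(H, V, w):
    """Attention-based MIL pooling (Ilse et al., 2018): patch features
    H (n, d) are aggregated into a single bag feature using learnable
    attention a_k proportional to exp(w^T tanh(V h_k))."""
    scores = np.tanh(H @ V.T) @ w   # (n,) unnormalized attention scores
    a = np.exp(scores - scores.max())
    a /= a.sum()                    # softmax -> attention probabilities
    return a @ H, a                 # bag feature (d,) and weights (n,)

rng = np.random.default_rng(0)
H = rng.normal(size=(30, 16))       # 30 patches with 16-D features
V = rng.normal(size=(8, 16))        # hidden projection (placeholder)
w = rng.normal(size=8)
z, a = abmil_pool(H, V, w)
```

The attention weights `a` are exactly the per-patch probabilities that RAM-MIL reuses as OT marginals, which is what distinguishes it from patch-level classifiers such as [1, 3, 4].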
Nevertheless, we add an experiment for comparing with [2] which will be added to the manuscript. As the datasets (CCI dataset and Warwick HER2 challenge) used in [2] are not accessible now, we evaluate [2] on with our experimental setting, using Camelyon16 as the in-domain dataset and Camelyon17 as the out of domain dataset. Accuracy and AUC of [2] are **58.67** and **67.86**, outperformed by RAM-MIL (Acc: **77.95**, AUC: **79.74**).
**Response to W2.2**
> Additionally, the experiments in this article only compare against baseline and self-variants, without comparing with the currently most advanced methods available.
We did compare with the most advanced methods, including CLAM, TransMIL and BayesMIL. In the rebuttal, we also add experiments comparing with [2] (domain adaptation for MIL, see responses to W1 and W2.1), HHOT (an OT method for WSI, see responses to reviewer WXnS) and SISH (a nearest-neighbour search method, see responses to reviewer WXnS).
**Response to W3.1**
Thanks for highlighting a critical motivation of our work. The sentence at line 70, page 2 is inaccurate and will be clarified in the updated version: "input data dimension" should be "the *intrinsic dimension* of the input data", which should be distinguished from the ambient (normal) dimension. We assume that the data reside on a low-dimensional manifold in the input data space, so the intrinsic dimension is lower than the ambient dimension. As mentioned in the paper (lines 111-113, page 3), the intrinsic dimension is measured by how many dimensions are needed to represent the data without much loss of information [Facco, et al.]. MIL performance here refers to the approximation error of an MIL model.
Facco, et al. Estimating the intrinsic dimension of datasets by a minimal neighborhood information. NeurIPS 2017.
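For readers unfamiliar with the estimator, the TwoNN method of [Facco, et al.] needs only each point's distances to its first and second nearest neighbors. The sketch below is our own illustrative numpy implementation (function and variable names are ours, not from the paper):

```python
import numpy as np

def twonn_intrinsic_dimension(X):
    """TwoNN intrinsic-dimension estimate (after Facco et al., 2017).

    Only the ratio mu_i = r2/r1 of each point's second- to first-nearest-
    neighbor distance is used; the maximum-likelihood estimate under the
    TwoNN model is d = N / sum_i log(mu_i).
    """
    X = np.asarray(X, dtype=float)
    n = len(X)
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(sq_dists, np.inf)               # exclude self-distance
    nearest_two = np.sort(sq_dists, axis=1)[:, :2]   # squared r1, r2 per point
    mu = np.sqrt(nearest_two[:, 1] / nearest_two[:, 0])
    return n / np.log(mu).sum()
```

For example, points drawn from a 2-D Gaussian linearly embedded in a 10-D ambient space yield an estimate close to 2, even though the ambient dimension is 10.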
**Response to W3.2**
Thanks for the advice. Directly reducing the ambient dimension to the estimated intrinsic dimension is a brute-force way to improve performance. To test whether this argument holds, we use a masked auto-encoder (MAE) with output dimension 5 (the estimated intrinsic dimension) to extract features from patches. To ensure good quality of the extracted features, we fine-tune the MAE on all patches from Camelyon16 for 60 epochs. The performance of CLAM fitted on the extracted features is shown in the second row of the table:
| Model | C16-AUC | C16-ACC | OOD-C17-AUC | OOD-C17-ACC |
|-------------|---------|---------|-------------|-------------|
| CLAM-SVD | 0.9216 | 0.6625 | 0.7486 | 0.4991 |
| CLAM-MAE | 0.8285 | 0.7368 | 0.7325 | 0.6200 |
| RAM-MIL-SVD | **0.9273** | **0.8475** | **0.7509** | **0.7053** |
Simply reducing the output dimension to the estimated intrinsic dimension does not yield decent performance, which justifies our RAM-MIL method.
**Response to W3.3**
*The theoretical results show that a lower intrinsic dimension would lead to a better performance of the multiple instance learning model. This statement is intuitively correct.* We list the intuitions as follows:
1) A lower intrinsic dimension may mean that the data lies on a simpler manifold or subspace. As the complexity of the subspace is lower, learning is easier.
2) Lower intrinsic dimension might also imply that the data is less noisy and redundant, which could make learning easier.
Although there exist some methods to estimate the intrinsic dimension of data, there is no principled method to reduce the intrinsic dimension or to determine the optimal intrinsic dimension. Our paper proposes a retrieval-augmented MIL that can potentially reduce the intrinsic dimension, and this is validated in the paper (Figure 3 and Table 3).
**Response to W4**
As indicated in line 155, page 4, the merging function is a convex combination. We have tested different configurations of the merging function, shown in Table 6, page 14, supplemental materials. We will make this clear in the updated version.
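As an illustration only (the weight `alpha` and the function name are hypothetical choices of ours, not taken from the paper), a convex-combination merge of in-domain and retrieved features can be sketched as:

```python
import numpy as np

def merge(f_in, f_retrieved, alpha=0.5):
    """Convex combination of in-domain and retrieved feature vectors.

    alpha in [0, 1]; alpha = 1 keeps only the original features,
    alpha = 0 keeps only the retrieved ones.
    """
    assert 0.0 <= alpha <= 1.0, "convex combination requires alpha in [0, 1]"
    return alpha * f_in + (1.0 - alpha) * f_retrieved
```

The convexity constraint guarantees the merged feature stays on the segment between the two inputs, so its scale never exceeds that of either source.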
---
Rebuttal Comment 1.1:
Comment: I appreciate the author's thoughtful response, and I have carefully reviewed the author's rebuttal. Overall, I partially acknowledge the novelty of the Retrieval-Augmented MIL method proposed in this paper and the research perspective that 'lower intrinsic dimension would lead to a better performance of the MIL model.' As I mentioned in my review, I have two main concerns about this article.
**The first point** is the need for a more detailed overview and comparison of domain adaptation in WSI Classification. Domain adaptation in pathological image domains has been developed for a considerable amount of time, with various technical methods at both slide-level and instance-level [1], such as solutions involving stain normalization, feature-level domain adversarial learning, spatial-based cycle migration, and others. This paper introduces a Retrieval-Augmented solution at the feature level, and I acknowledge the novelty of this approach. However, the authors did not comprehensively and systematically review and compare the literature in this domain in their paper, while highlighting the innovation and advantages of their method. In the experimental comparison, it is important to step beyond the 'self-variant,' including a comparison with state-of-the-art methods from other paradigms within this domain. For instance, domain adaptation in WSI Classification typically focuses on addressing staining variations. How does Retrieval-Augmented at the feature level compare in terms of advantages? How does the performance of feature-level domain adversarial learning perform under the same experimental settings as this paper? In summary, I believe that the current review and comparison of other paradigms within the same domain are not comprehensive enough in this paper.
**The second point** is that the study of feature dimensions in this paper is not sufficiently thorough and comprehensive. As I mentioned in my review,
> most current MIL methods first use a pretrained self-supervised network to map instances from the image domain to the feature domain, and the dimension of these features can be defined by the pretrained self-supervised network. Therefore, can't the input data dimension issue raised in this article be simply solved by modifying and defining it through a pretrained network?
Although the author validated this idea by constructing a simple MAE in the rebuttal, this is not comprehensive enough. Currently, most MIL articles on the Camelyon 16 dataset, such as [2,3,4,5], use pretrained self-supervised models or ImageNet pretrained models to extract features beforehand and achieve good performance (much higher than the C16-AUC 0.8285 mentioned in the author's rebuttal). As far as I know, the dimensions of these self-supervised or ImageNet pretrained model features can be customized through simple network modifications during training. The claim that reducing feature dimensions can enhance MIL's performance needs more comprehensive validation. Moreover, under the same feature dimensions, how advantageous is the Retrieval-Augmented method?
Another issue is determining the optimal feature dimension. In the rebuttal, the author conducted an experiment using MAE with a feature dimension of only 5 (the estimated intrinsic dimension). How is this 'estimated intrinsic dimension' calculated? Generally, self-supervised feature dimensions are around 128-256 dimensions. The author mentions that 'there is no principled method to reduce the intrinsic dimension and determine the optimal intrinsic dimension. Our paper proposes a retrieval-augmented MIL that could potentially reduce the intrinsic dimension and it is validated in the paper.' This is quite confusing.
Finally, I would like to express my gratitude once again for the author's efforts and thoughtful response. I am very interested in this article and am willing to consult and discuss with the author, reviewers, and the AC. I look forward to the author's more comprehensive summary and response, as well as input and guidance from the other reviewers and the AC. By the standards of a top academic conference, I anticipate that this article could become a leading piece of research in the field.
[1] Srinidhi, Chetan L., Ozan Ciga, and Anne L. Martel. "Deep neural network models for computational histopathology: A survey." Med Image Anal. 67 (2021): 101813.
[2] Li, Bin, Yin Li, and Kevin W. Eliceiri. "Dual-stream multiple instance learning network for whole slide image classification with self-supervised contrastive learning." CVPR. 2021.
[3] Shao, Zhuchen, et al. "Transmil: Transformer based correlated multiple instance learning for whole slide image classification." NeurIPS 34 (2021): 2136-2147.
[4] Chen, Richard J., et al. "Scaling vision transformers to gigapixel images via hierarchical self-supervised learning." CVPR. 2022.
---
Reply to Comment 1.1.1:
Title: Responses to the first point
Comment: Many thanks to the reviewer for acknowledging the novelty of our approach. We agree with the reviewer that the updated version should discuss existing work on domain adaptation for whole slide images, including but not limited to [1,2,3,4]. However, as we stated in our response, and as acknowledged in the survey paper the reviewer mentioned (Srinidhi et al. – *domain adaptation and stain variability methods, Table 5*), the vast majority of existing methods are based on fully supervised learning on patches, rather than weakly supervised learning on slides as in our paper. Specifically, fully supervised learning requires patch-level labels, which are usually not accessible in the real world. Thus, those papers are not directly comparable with our method, which targets the multiple instance learning setting for slide-level classification, as indicated by our submission title. Here are our responses to the two questions.
>For instance, domain adaptation in WSI Classification typically focuses on addressing staining variations. How does Retrieval-Augmented at the feature level compare in terms of advantages?
**Staining transfer:** [Cho et al] and [Shaban et al] are two of the most popular stain transfer methods for pathological images; both focus on patch-level performance and fail to consider the whole-slide nature of pathological images. Regarding methodology, [Cho et al] and [Shaban et al] require training a generative adversarial network (on patches), a task renowned for the inherent difficulty of achieving stable training. In contrast, our proposed RAM-MIL relies on an OT retriever, obviating the need for further training, and is more convenient to use. As [Cho et al] has no available code, we adapt [Shaban et al] with minimal modification to fit the slide-level setting. The experimental comparison with [Shaban et al] will be updated later. *[Updated at the bottom, 2023-Aug-17]*
>How does the performance of feature-level domain adversarial learning perform under the same experimental settings as this paper?
**Domain adversarial learning:** We have already provided an experimental comparison between [Yang et al], the adversarial-learning-based domain adaptation method the reviewer mentioned, and our RAM-MIL: “**We evaluated [Yang et al] under our experimental setting, using Camelyon16 as the in-domain dataset and Camelyon17 as the out-of-domain dataset. Accuracy and AUC of [Yang et al] are 58.67 and 67.86, outperformed by RAM-MIL (Acc: 77.95, AUC: 79.74).**” Please see "response to W1 and W2.1".
We sincerely appreciate the valuable comments of the reviewer, which have significantly enriched the depth and scope of our discussion pertaining to domain adaptation. We will add the above discussion to our paper to elucidate the distinctions between RAM-MIL and pre-existing methods with more clarity. We express our appreciation again for the positive appraisal of the novelty of our paper and we are willing to address any further concerns.
*[Cho et al] Cho, Hyungjoo, et al. "Neural stain-style transfer learning using GAN for histopathological images." arXiv preprint arXiv:1710.08543 (2017).*
*[Shaban et al] Shaban, M. Tarek, et al. "Staingan: Stain style transfer for digital histological images." 2019 Ieee 16th international symposium on biomedical imaging (Isbi 2019). IEEE, 2019.*
*[Yang et al] Yang, Yuchen, et al. "Double adversarial domain adaptation for whole-slide-image classification." Medical Imaging with Deep Learning. 2021.*
---
*[Update: 2023-Aug-17]* We ran [*Shaban et al*], a representative patch-level CycleGAN-based staining transfer method, in the Camelyon17 domain adaptation setting. To obtain the WSI features, we use average pooling. For fairness, the same classifier as RAM-MIL is used for [*Shaban et al*]. Here are the results:
| Method | AUC | ACC |
|-|-|-|
|CLAM-SISH| 0.7647| 0.7444|
|[*Shaban et al*]| 0.6458| 0.6889|
|RAM-MIL| **0.8209**| **0.7667**|
RAM-MIL outperforms the patch-level staining transfer based method.
Another drawback of patch-level staining transfer is the prohibitive time needed to train a CycleGAN. Per-epoch training time exceeds 1 hour on a single A100 GPU (80 GB memory) even when using only about 5% of the patches (as guided by the paper) from the WSI pool. We completed this experiment with multiple A100s running in a distributed setup. Increasing the percentage of patches would further increase the training cost.
--- | null | null | null | null | null | null |
VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset | Accept (poster) | Summary: This paper proposes an automatic pipeline to create captions for videos, where multiple modalities are involved in the generation process of the captions. Authors initially train separate audio and video-based captioners. Then, those audiovisual-driven captions, alongside subtitles are restructured into a single caption by prompting a pertained LLM that is responsible for aggregating multiple text/caption sources. Next, such a dataset is used to train a multimodal model capable of addressing a variety of text-to-video/audio retrieval, generation, and question-answering tasks.
Strengths: 1. This work confirms that today’s unimodal caption generation models (datasets) are very strong baselines (resources) that diminish the need for a multimodal data collection process which could be costly and non-scaleable, thanks to summarization and aggregation capabilities of the latest LLMs.
2. The downstream experiments are comprehensive and show promising results.
Weaknesses: 1. Paper is not easy to follow, especially due to the poor choice of naming for components (i.e features, losses etc) and abbreviations.
2. Paper is highly incremental. It is heavily built on top of numerous existing pertained models and offers almost no technical novelty even in the aggregation step.
3. Authors propose multiple components to the loss function (Eq 1-5). Still, there is almost no discussion on how these different components interact with one another, and impact the quality of learned representations. The experimental section is only focused on downstream tasks and the generated dataset.
4. The dataset generation step is shaped by the power of the pertained models which authors have adopted. They are also combining multiple datasets during pertaining. Hence, outperforming SOTA is not only expected but one wonders whether such comparisons are even fair. At the end of the day, VAST stands on the shoulders of numerous pertained models developed by others.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: 1. On OM-VCM: $f_{omv}$ which is fed to the text encoder plays the role of condition, however, a part of $f_{omv}$ is $f_{s}$ which itself is the output of text encoder. I am not quite sure how this makes sense in terms of the space of inputs to the text encoder. The authors should clarify and elaborate.
2. Line 206: what does “we uniformly model the relations of V-T, A-T, VA-T, VS-T, and VAS-T ” mean?
3. Eq 2: why this cannot be captured via contrastive loss? Is it only due to maintaining temporal resolution? If so, authors should try sequences of length 1 which is [cls] token to confirm that.
4. Line 219: using a single video frame is quite strange. Considering the length of video clips (e.g 5-30 seconds for VAST) does it make sense to pick a single frame, especially if we are aiming to create captions for videos? Did authors try more than 1 frame during pertaining to study the effect of such choice?
5. Line 223-224: What is the contribution of VCM. In other words, what if you equally weigh the losses or reverse them, how do results vary?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: This work relies on multiple pretrained models and datasets. The bias-fairness concerns hence scale considerably due to such a design. On top of it is the choice of leveraging LLMs whose hidden biases are not quite yet clear for the research community. This is not to discourage using such models, but I believe authors can further improve the limitation and broader impact section of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **weakness 1**
Considering that the motivation of our work is to build a more general and comprehensive omni-modality foundation model, it unavoidably involves multiple modalities (text, audio, vision, subtitle), multiple features (single/fused-modality features, global/local features), and multiple losses (contrastive, matching, captioning). We have made efforts to keep those concepts clear, but given the inherent complexity of the algorithms themselves, the presentation may still take some time to digest. We apologize for this and will further improve the presentation in the next version.
**weakness 2**
+ Firstly, the technical novelty has been illustrated in the global rebuttal [Common Question 1].
+ Secondly, we think the statement that "VAST is highly incremental because the single-modality encoders are initialized from other models" is not appropriate. The goal of cross-modality pretraining research is to build strong connections between modalities and to serve as a multimodal generalist for downstream tasks such as captioning and retrieval, rather than to train single-modality representations from scratch for single-modality tasks. Likewise, in the CV field most segmentation and detection methods are built on top of vision backbones (ResNet or ViT), and it would not be appropriate to call all of those methods incremental.
+ Thirdly, training those single-modality encoders from scratch would result in slower convergence and require more computational resources, which is neither economical nor affordable for all researchers. By contrast, utilizing pretrained models as single-modality encoders provides a good starting point and lets the model focus more on cross-modality correlation learning; this paradigm has been widely adopted by most current mainstream cross-modality pretraining works (such as BLIP, BLIP-2, i-Code, VALOR, etc.).
**weakness 3**
Thanks for pointing this out; we conducted additional ablation experiments regarding the contribution of the different losses. All models are trained on VAST-27M with "V+A+S" modalities used in both pretraining and finetuning. Due to time limitations we trained for only half as many iterations as the experiments in Table 7 of the main paper. The results are shown in the following table, from which we can see that all losses (Eq1-Eq4) are important for VAST's superior performance. The effectiveness of Eq5 has already been demonstrated in Table 7 of the main paper.
| Pretraining Loss | MSR-ret (R@1)| MSR-cap (CIDEr) | MSR-qa (Acc)|
|---|---|---|---|
| - | 37.6| 60.3| 44.6
| OM-VCC | 43.9 | 61.3 | 45.0 |
| OM-VCC + OM-VCM | 46.5 | 63.2 | 45.6 |
| **OM-VCC + OM-VCM + OM-VCG (default settings)** | **47.8** | **68.1** | **46.8** |
**weakness 4**
Because web-crawled datasets have limitations in both caption accuracy and diversity, generating training data can supply more high-quality cross-modality samples by combining and transferring the models' learned representations into the training corpus. This routine has been widely adopted in today's models such as BLIP, BLIP-2 and CLIP-VIP, and in MLLMs such as LLaVA and MiniGPT-4. Combining multiple datasets is also a widely used way to expose the model to different data distributions and prevent overfitting to a single corpus.
**question 1**
Considering that subtitles (ASR transcriptions in our work) also take the form of text, like captions, we use a single text encoder for both caption and subtitle encoding (sharing weights between the two modalities). An alternative is to use two separate encoders. From the following experimental results, we find that using one shared encoder yields better performance on five benchmarks as well as fewer total parameters.
| Text and Subtitle encoders| MSR-ret(R@1) | MSR-cap (C) | MSR-qa(Acc) | YouCook2-ret(R@1) | YouCook2-cap(C) |
|---|---|---|---|---|---|
| **Share(default settings)** | **47.8** | **68.1** | **46.8** | **41.2**| **179.0** |
| Separate | 45.6| 67.8 | 46.3 | 37.7 | 175.0 |
**question 2**
+ Firstly, V, A, T, and S stand for vision, audio, text and subtitle, following the abbreviations in Table 1. Thanks for pointing this out; we will explain it in the next version.
+ Secondly, Eq1-Eq4 only model the relationship between 'VAS' and 'T'. However, as Section 4.3 states, in practical scenarios there are many circumstances in which only some, not all, of the modalities are available. Considering this, we additionally build the relations V-T, A-T, VA-T and VS-T to keep training and testing consistent and to let the model perform well under different modality inputs (Eq 5).
**question 3**
A contrastive loss has already been used in Eq1 (OM-VCC) to pull the global representations of matched samples together. Eq2 is the matching loss, which predicts whether a video-caption pair matches or not. Compared to VCC, VCM encourages more fine-grained interactions between caption tokens and vision or audio patches. Both losses are widely adopted in current cross-modality research (usually named ITC and ITM). The experiments above in Weakness 3 also demonstrate the necessity of OM-VCM.
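To make the distinction concrete, here is a minimal numpy sketch of a global contrastive objective (ITC/VCC-style symmetric InfoNCE over [cls] embeddings); the token-level cross-attention used by VCM is deliberately absent, which is exactly what a sequence-length-1 variant would lose. All names and the temperature value are our illustrative choices, not the paper's implementation:

```python
import numpy as np

def contrastive_loss(v, t, temperature=0.07):
    """Symmetric InfoNCE over global embeddings (ITC/VCC-style).

    v, t: (N, D) L2-normalized video and caption embeddings; the matched
    caption for row i of `v` is row i of `t`.
    """
    logits = (v @ t.T) / temperature                               # (N, N) similarities
    # log-softmax over captions (rows) and over videos (columns)
    lsm_rows = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    lsm_cols = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    diag = np.arange(len(v))
    # average video->text and text->video cross-entropy on matched pairs
    return -(lsm_rows[diag, diag].mean() + lsm_cols[diag, diag].mean()) / 2.0
```

With perfectly aligned embeddings the loss approaches zero, while a shuffled pairing gives a large loss; a matching head, in contrast, would score each pair with cross-attention over full token sequences rather than a single dot product.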
**question 4**
We conducted the following experiments with different numbers of frames used in pretraining. Using more frames improves performance a little. Considering the balance between performance and training cost, we choose to sample only 1 frame during pretraining.
| Sampled frames | MSR-ret(R@1) | MSR-cap(CIDEr) | MSR-qa(Acc) |
|---|---|---|---|
| 1 (default setting) | 47.8 | 68.1 | 46.8 |
| 2 | 48.0 | 68.4 | 46.9 |
| 4 | 48.0 | 68.6 | 47.1 |
**question 5**
Indeed, during training the loss ratio of VCM and VCC is already equal (1:1). At test time we first compute similarities according to VCC, which is fast, and then use VCM to rerank the top-50 candidates returned by VCC, which further improves retrieval accuracy. This process is commonly used in recent vision-language pretraining methods such as ALBEF, BLIP, and BLIP-2.
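The retrieve-then-rerank procedure described above can be sketched as follows (function names and the toy matching score are hypothetical stand-ins for the actual VCC similarities and VCM head):

```python
import numpy as np

def retrieve(query_emb, gallery_embs, matching_score_fn, k=50):
    """Two-stage retrieval: fast dot-product ranking (contrastive stage),
    then rerank the top-k shortlist with a slower matching score."""
    sims = gallery_embs @ query_emb              # stage 1: one matmul scores all items
    shortlist = np.argsort(-sims)[:k]            # top-k candidate indices
    # stage 2: score only the shortlist with the expensive matching head
    reranked = sorted(shortlist,
                      key=lambda i: -matching_score_fn(query_emb, gallery_embs[i]))
    return [int(i) for i in reranked]
```

The design point is cost: the matching head runs only k times per query instead of once per gallery item, so almost all of its accuracy benefit is kept at a fraction of the compute.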
---
Rebuttal Comment 1.1:
Comment: After carefully reading authors rebuttal, I maintain my original rating. | Summary: This paper introduces a large-scale dataset called VAST-27M, which contains audios, videos, subtitles, and generated captions. The authors employ an LLM to combine all forms of textual information to create omni-modality captions. They train an omni-modality video-text foundational model on this dataset. Extensive experiments conducted on various downstream tasks demonstrate the effectiveness of both their dataset and model.
Strengths: - They provide a valuable dataset, and the experiments demonstrate the superiority of this dataset.
- The paper is well-written with clear explanations.
- The experiments are thorough, including the validation of the methodology, the effectiveness of the dataset, and comprehensive comparisons with the state-of-the-art (SOTA) approaches.
- The model achieves state-of-the-art (SOTA) performance on 22 downstream tasks.
Weaknesses: - The authors utilized the Vicuna model instead of ChatGPT, which might have an impact on the quality of the data. Could you provide a comparison of the quality of data generated by ChatGPT and Vicuna?
- Why weren't multimodal LLMs like LLaVA or MiniGPT-4 considered, instead of using separately trained captioning models?
- The methodology lacks technical contributions and doesn't differ significantly from previous works in terms of model architecture and training. When dealing with multiple forms of text simultaneously, the current models simply aggregate multiple losses, which may not fully leverage the available information.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I'd like to inquire about the authors' plans for sharing the data and code, including the captioning model, VAST model, and the dataset.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **weakness 1**
+ Our work needs to generate 27M omni-modality captions, and locally deployed open-resource large language modal (such as Vicuna) can fulfill parallel generation which is efficient. However this is not feasible for ChatGPT due to its API is relatively inefficient and requies additional costs, so we choose to use the former for caption generation.
+ With regards to caption quality comparison between Vicuna and ChatGPT, we visualize two examples in the following table. The same video caption, audio caption, subtitle and instruction are given to both models and we sample omni-modality captions twice. From the examples shown in the following table, we can find that both models can generate relatively high-quality captions and can comprehensively reflect infomation of all modalities. Captions generated by ChatGPT tends to be longer and cover most clues given in the prompts, and can also imagine scences like "wind rustled through the trees" while 'trees' does not appear in the prompts. By contrast, Vicuna tends to generate short but accurate captions, while some details like "splashing" and "mailbox" may be missed.
|Modality|Caption|
|---|---|
| Audio| Wind, birds, women speaking, splashing, and human sounds are heard. |
| Video| Woman picked up a phone and walked around the garden. |
| Subtitle | I can't take delivery now. I'm on vacation, why don't you put it next to the mailbox at the door of my house. |
| Omni(by Vicuna-13b) | A woman walks with a phone in a garden while birds chirp, wind blows, and a human speaks, with the sound of splashing in the background, and the delivery cannot be taken now as the person is on vacation, so they suggest leaving it near the door of their house. |
| | A woman walks with a phone in a garden while wind, birds, and human sounds are heard, and a person is unable to receive a delivery due to being on vacation. |
| Omni(by Chat-GPT) | The woman, while on vacation, picked up her phone and walked around the garden, surrounded by the sounds of wind, birds, splashing, and various human sounds. In a speech subtitle, she mentioned being unable to take delivery and suggested leaving it next to the mailbox at the door of her house. |
| | The woman, who was on vacation and unable to take delivery, walked around the garden, speaking into her phone, while wind rustled through the trees, birds chirped in the background, and various splashing and human sounds filled the air. She suggested leaving the package next to the mailbox at the door of her house. |
|**Modality**|**Caption**|
|---|---|
| Audio | A turkey is heard, followed by coughing, wind, bird chirping, and animal sounds. |
| Video | Teenage boys walking in a zoo togehter on a narrow pathway surrounded by trees. |
| Subtitle| Where do we go now, shall we see elephants first? But I heard that the gorilla exhibition is going to start soon, why don't we go see that first. whatever, we have plenty of time today. |
|Omni(by Vicuna-13b) | Two teenage boys walking in a zoo surrounded by trees and trees, with a turkey and various animal sounds in the background, discussing their next destination. |
| | In a zoo, teenage boys walk together on a narrow path surrounded by trees, while a turkey is heard, followed by coughing, wind, bird chirping, and animal sounds, and they discuss where to go next, considering seeing elephants or the gorilla exhibition. |
| Omni(by Chat-GPT) | As the teenage boys walked together on a narrow pathway surrounded by trees at the zoo, a turkey's call echoed in the background, followed by the sound of coughing, gentle wind, bird chirping, and a symphony of animal sounds. One of the boys remarked, "Where do we go now? Shall we see the elephants first? But I heard that the gorilla exhibition is going to start soon. Why don't we go see that first? Whatever we decide, we have plenty of time today." |
| | As the teenage boys strolled through the zoo surrounded by trees, the air was filled with the sound of a turkey, accompanied by the occasional cough, gentle wind rustling the leaves, the cheerful chirping of birds, and a variety of other animal sounds. Amidst their conversation, one of the boys pondered their next move, suggesting they start by visiting the elephants. However, another boy excitedly mentioned that the gorilla exhibition was about to begin and proposed going there first. Regardless of their decision, they reassured each other that they had plenty of time to explore the zoo today. |
**weakness 2**
+ Firstly, this work (VAST) was done concurrently with LLaVA and MiniGPT-4. In addition, we think that even though MLLMs can generate more detailed descriptions, their caption accuracy may not be guaranteed (e.g., hallucinations are a general problem in those models), and inaccurate captions may have harmful effects.
+ Secondly, the inference speed of those models is limited compared to trained captioners due to their extremely large parameter counts.
+ Thirdly, there seem to be no available audio LLMs for generating audio captions.
+ Fourthly, LLaVA and MiniGPT-4 use image-text data for pretraining, which may not be appropriate for captioning videos, since videos involve both static appearance and dynamic motion information.
Considering the above reasons, we chose to train captioners on mixed datasets (both image and video data) to ensure caption quality.
**weakness 3**
The technical contributions of VAST have been illustrated in the global rebuttal [Common Question 1].
**question**
We promise that all of the VAST-27M data (vision captions, audio captions and omni-modality captions), model checkpoints, and the training and testing code of VAST will be publicly released to facilitate research in the multimodal learning community.
---
Rebuttal Comment 1.1:
Comment: The author's reply solved my concerns and I recognize the contributions of the paper, especially the dataset.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
Thanks for the recognition of our work, and also for the effort and time you have devoted as a reviewer!
Author | Summary: The paper proposed to build a new multi-modal foundation model that includes vision, audio, subtitle and text modalities. The designed architecture of the model is very similar to previous works, such as BLIP. The paper also made another contribution, a new multi-modal video dataset VAST-27M. VAST-27M reuse videos from another dataset HD_VILA_100M. The paper uses off-the-shelf video captioning models and audio captioning models to generate text descriptions for the input video. Large language models are then applied to summarize the text descriptions together with the original subtitles of the video using customized prompts. The paper presents experimental results on 22 downstream datasets to demonstrate the effectiveness of the proposed approach.
Strengths: High-quality pre-training datasets that are openly accessible to the research community are very important for experimenting with new research ideas and pushing research boundaries. The new VAST-27M dataset could be very valuable for researchers working on related topics. The experimental results also demonstrate the advantages of the trained model.
Weaknesses: The following are more detailed comments and suggestions about the paper.
1, What is the difference between subtitles and text? The paper seems to treat them as different modalities. Also does the audio modality already contain all the information from subtitles?
2, Based on the implementation details in Section 5.1, the final model is initialized with other pretrained encoders, and pretrained with lots of other datasets as well. It is not clear how much the improvement is from the proposed VAST-27M dataset. It will be better if the paper can show the difference for with or without using the VAST-27M dataset. This can help us better understand the quality of VAST-27M.
3, How to select a subset of 27M videos from HD_VILA_100M? Please elaborate the details why the paper selected a subset from HD_VILA_100M.
4, In Line 60, why does the cross-attention layer only involve the text modality? Do we have cross-attention between video and audio?
5, The paper seems to use BLIP for vision captioning. Why not use BLIP-2, which can generate better captions?
6, The overall design of the learning objectives is very similar to BLIP, which also uses three losses, i.e., Image-Text Contrastive Loss (ITC), Image-Text Matching Loss (ITM), and Language Modeling Loss (LM). The novelty of the proposed learning framework seems to be very limited.
7, All the captions from VAST-27M are generated by models. Does the paper look into the bias of the used models and their generated captions? Will they limit the quality of the dataset?
8, A lot of details in the paper are similar to reference [7]. It is unclear what is the real contribution of the current submission. Many sections are very similar to [7]. I recommend clarifying the difference with [7], e.g., what are the contributions from the current submission, what has been done in [7].
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: See above.
Flag For Ethics Review: ['Ethics review needed: Failure to comply with NeurIPS Code of Ethics (lack of required documentation, safeguards, disclosure, licenses, legal compliance)']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **weakness 1**
+ Subtitles are usually obtained by ASR or OCR techniques and can serve as a supplementary modality for video understanding, while captions are usually objective descriptions of the objects or events in images or videos. Even though both are represented as text, they have different data distributions and information densities. In this work, the word "T"/"text" in VAST refers to "caption", which is a language output or retrieval query, while "S"/"subtitle" is an input modality. We use text (T) instead of caption (C) in "VAST" simply to get an abbreviation that sounds better.
+ In this work, the audio modality already contains all the information from the subtitles (ASR text), but the ablation study in Table 7 shows that explicitly taking the subtitle as input achieves better results than taking only the audio signal as input. We attribute this to the fact that the current audio encoder does not possess strong speech recognition abilities, and we think it is a valuable future research direction to perceive both environmental sounds and human speech from the audio signal alone.
**weakness 2**
Thanks for your suggestions. We conduct the following experiments to demonstrate the effectiveness of VAST-27M. VAST-B is trained on the same datasets as VAST, but with the vision backbone replaced by CLIP-ViT-B/16 for fast training. Both models are trained with the same number of iterations and the same batch size. The results in the following table show that VAST-27M contributes substantially to VAST's high performance; without it, performance drops drastically on seven benchmarks.
| Model | MSR-ret | MSR-cap | MSR-qa | YouCook2-ret | YouCook2-cap | VALOR32K-ret | VALOR32K-cap |
|---|---|---|---|---|---|---|---|
| **VAST-B** | **56.9/79.8/87.1** | **74.2** | **48.2** | **36.3/57.7/67.3** | **186.7** | **70.0/90.2/94.7** | **54.6** |
| VAST-B (w/o VAST-27M) | 50.8/76.1/85.5 | 70.7 | 47.0 | 32.6/57.5/67.8 | 130.3 | 62.8/86.5/92.5 | 53.0 |
**weakness 3**
The specific selection standard has been illustrated in the global rebuttal [Common Question 2].
**weakness 4**
In VAST, the text encoder serves as the fusion interface for multimodal understanding and generation via cross-attention. Text takes the role of query, while a single one of, or some combination of, vision, audio, and subtitle takes the role of key and value. We take this design mainly because most tasks we target (captioning, QA, retrieval, etc.) can be uniformly modeled with a language interface. In the current version of VAST, video and audio do not serve as fusion interfaces; we think it is important to update VAST in the future to handle more tasks, such as language-guided video/audio prediction/generation/editing, and it will then be necessary to add cross-attention layers to the video/audio encoders and to take them as output interfaces as well. Thanks for your suggestions.
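To make this design concrete, below is a minimal numpy sketch (shapes and variable names are illustrative, not VAST's actual implementation) of single-head cross-attention in which text tokens act as queries over a concatenation of vision, audio, and subtitle features:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(text_q, condition_kv, d):
    # text_q: (Lt, d) text-token queries; condition_kv: (Lc, d)
    # concatenated vision/audio/subtitle features acting as keys and values.
    scores = text_q @ condition_kv.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ condition_kv

rng = np.random.default_rng(0)
d = 8
text = rng.normal(size=(4, d))       # caption tokens (query side)
vision = rng.normal(size=(16, d))    # patch features
audio = rng.normal(size=(6, d))
subtitle = rng.normal(size=(5, d))

# Any modality combination is just a different key/value concatenation.
fused = cross_attention(text, np.concatenate([vision, audio, subtitle]), d)
print(fused.shape)  # (4, 8): one fused vector per text token
```

Note how the condition side only enters through keys/values, so swapping in a different subset of modalities requires no architectural change on the text side.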
**weakness 5**
In fact, VAST uses a self-trained vision captioner, as illustrated in Section 3.1 (line 128). We do not use open-source BLIP or BLIP-2 mainly because they are finetuned on image-text data (MSCOCO) only, so their ability to describe dynamic events or actions is neglected. We therefore trained a more general captioner that handles both static and dynamic visual scenarios by finetuning on a mixture of image/video captioning datasets, in order to generate higher-quality captions for VAST-27M.
**weakness 6**
The technical novelty has been elaborated in the global rebuttal [Common Question 1]. BLIP [6] is an image-language pretraining method aiming at bootstrapping the model with a trained captioner and filter. The differences between VAST and BLIP can be summarized as follows.
+ About data generation. The vision captioner for VAST-27M is inspired by BLIP, but we make several modifications to fit multi-modality pretraining tasks, including 1) training the vision captioner on a mixture of image and video caption datasets to increase its generality, 2) extending from the vision-language field to the audio-language field by training a general audio captioner, and 3) utilizing an LLM to integrate the multi-perspective captions.
+ About model architecture. Different from BLIP, which contains an image encoder and a text encoder, VAST additionally contains an audio encoder that supports audio input. Moreover, the vision encoder of VAST supports both image and video input, and the text encoder supports both caption and subtitle input.
+ About training objectives. Image-text contrastive learning (ITC), image-text matching (ITM), and image-conditioned text generation are the three most common training losses used by current image-language pretraining models, including BLIP. However, only the relationship between image and text can be constructed through these losses. VAST successfully extends them to omni-modality losses (OM-VCM, OM-VCC, and OM-VCG, as illustrated in Section 4.2) and uses the modality grouping strategy illustrated in Section 4.3 to handle different modality combinations, which cannot be realized with BLIP's architecture and training objectives.
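As an illustration of the contrastive piece of such omni-modality objectives, the sketch below (our own simplification, not the paper's code) computes an InfoNCE-style loss between caption embeddings and pooled omni-modality (vision+audio+subtitle) embeddings, with matched pairs on the diagonal:

```python
import numpy as np

def info_nce(text_emb, omni_emb, tau=0.07):
    # text_emb, omni_emb: (B, d); matched text/omni pairs share the same row index.
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    o = omni_emb / np.linalg.norm(omni_emb, axis=1, keepdims=True)
    logits = t @ o.T / tau                      # cosine similarities / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # cross-entropy on the diagonal

rng = np.random.default_rng(1)
B, d = 4, 16
omni = rng.normal(size=(B, d))                # pooled vision+audio+subtitle feature
text = omni + 0.1 * rng.normal(size=(B, d))   # well-aligned matched captions
loss = info_nce(text, omni)
print(round(loss, 4))
```

With well-aligned pairs the loss is close to zero; replacing `omni_emb` with a vision-only embedding gives the ordinary ITC special case.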
**weakness 7**
The dataset bias has been illustrated in the global rebuttal [Common Question 3].
**weakness 8**
VALOR proposed a one-million-scale video dataset that connects text with both video and audio, together with a vision-audio-text model. VAST has explicit differences and strengths compared with VALOR, which can be summarized as follows:
+ Firstly, VALOR-1M contains limited semantic concepts and cannot easily be scaled up due to its manual annotation. By contrast, VAST-27M is automatically generated, can be scaled up easily, and contains many more semantic concepts, which is quantitatively reflected by the experiments shown in Tables 4, 5, and 6 in the paper: models pretrained on VAST-27M outperform models trained on VALOR-1M on V-T, A-T, and AV-T tasks.
+ Secondly, as illustrated in Common Question 1, VALOR stands for a series of works that model both vision and audio, while VAST for the first time unifies all of those modalities in one framework.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for answering my questions during the rebuttal. Some of my questions have been clarified by the authors. However, I am still skeptical about weakness 4 and weakness 8. In weakness 4, ignoring the audio and video in cross attention is not well justified, as the paper claims to be an Omni-Modality Foundation Model. For weakness 8, the contributions of the current submission are limited if we consider reference [7] as pre-existing work.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
Thanks for your reply! Since in this work we mainly focus on cross-modality semantic understanding and generation tasks that take language as the output interface, such as retrieval, captioning, and question answering, in the model architecture we simply take language as the query and vision, audio, and subtitle as keys for the cross-attention mechanism in the text encoder. In future work, we will improve the foundation model to additionally support visual/audio/audiovisual grounding tasks, such as moment retrieval or text-conditioned highlight detection. To support those tasks, a cross-attention interface for audio or vision may be needed; alternatively, we can simply follow DETR and initialize a series of learnable tokens that are concatenated with the text tokens, predicting locations at the output of those learnable tokens. Thanks for your valuable advice!
In addition, our work has significant differences from VALOR. First, with regard to the dataset, VAST-27M is an **auto-generated** dataset that is **27 times larger than VALOR-1M**, which is a manually annotated dataset. With regard to the foundation model, as illustrated in the global rebuttal **[common question 1]**, VALOR belongs to the first class, which models vision, language, and **audio** jointly (e.g., AVLNet, i-Code, VALOR, ECLIPSE). There is another series of works that model vision, language, and **subtitle** jointly (UniVL, MELTR). VAST for the first time **unifies both series of works and supports all of the tasks that each of them can support**. In addition, VAST achieves 22 new state-of-the-art results and **surpasses VALOR on all tasks that VALOR has been evaluated on**, with huge margins.
Best,
Author
---
Rebuttal 2:
Comment: Dear reviewer,
There is not much time left in the discussion stage. If you still have questions about our work, please let us know and we will reply as soon as possible. Thanks for your effort and time!
Author | Summary: This paper proposes a large-scale omni-modality video caption dataset based on an existing video-text dataset, namely VAST-27M. Specifically, the authors utilized the VL models to generate captions and LLM to generate omni-captions for videos. Then a vision-audio-subtitle-text model is trained for a range of video tasks and establishes SOTA in several benchmarks.
Strengths: The paper is clearly written and easy to read.
The idea of combining text, video, and audio in one caption is interesting and novel. I believe this will contribute to the multimodal learning community.
The performances over a wide range of video-language-audio tasks are strong.
Weaknesses: 1. Despite the dataset contribution, the technical contribution of the pre-training model is somewhat weak and lacks novelty.
2. I am confused by some details in the approach. (1) In Lines 185-186, why do you need to feed the caption tokens again instead of using the textual embeddings from the text encoder above? (2) In OM-VCG, do you use a masking strategy like BERT or generative training like GPT? Based on my understanding, you are adopting the former, in which case I think the name "video caption generation loss" is improper; masked language modeling (MLM) may be better.
3. Although the experimental results are very strong, I am unsure where the performance gain comes from. Several points that might be attributed to this are not discussed. (1) Combination of different datasets. The model combines several different datasets, including VAST-27M while the dataset scale is not compared with SOTA methods. (2) Strong backbones. As indicated in Line 213, EVAClip-ViT-G is adopted as a visual backbone which is different from SOTA models, which makes the comparison in Table 3 unfair. (3) More modalities. Although Table 7 provides an extensive comparison of using different modalities, there are no results that could be directly compared in Table 3 for a fair comparison.
4. Some details are missing. (1) How do you select the 27M video clips from 100M clips in HD-VILA-100M? What is the selection standard? (2) Do you use audio in the HD-VILA-100M dataset for audio captioning and as input for the audio encoder? It is not clearly explained in the paper. (3) How do you sample frames in videos and how do you deal with video embedding from frame features in ViT (i.e., how to get f_v)? By Meanpooling?
5. Lack of discussion of related works. As an extension of HD-VILA, CLIP-ViP used an off-the-shelf image captioner to generate captions for the HD-VILA-100M dataset. The authors should also compare with that work and clarify their advantage over CLIP-ViP. It seems that only the audio information is not used in the CLIP-ViP model.
6. It would be helpful to show some examples in the dataset for illustration.
7. Some typos: Row 1 in Table 5.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weakness part.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have addressed the scale limitation of the current dataset. It would be great if they could share more insights into the potential limitations of using their dataset and model, such as biases it could contain.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **weakness 1**
The technical contributions of the VAST model have been explained in our global rebuttal **[Common Question 1]**.
**weakness 2**
(1) Using the original caption tokens in OM-VCM, OM-VCG, and OM-VCC keeps consistency with BERT, whose parameters are inherited for text encoder initialization. In addition, we also tried using the text embeddings from OM-VCC as input for the text encoder used in OM-VCM and OM-VCG, but found that performance was slightly reduced.
(2) We consider that common generative modeling methods can take two forms: GPT-style generative modeling (LM: input the current word, output the next word) and causal masked language modeling (CMLM: mask the current token and predict it, conditioned only on the previous tokens). As stated in Section 4.2 (line 196), VAST takes the latter (CMLM) because, in the early development stage, we found that BERT+CMLM performs slightly better than GPT+LM on most downstream benchmarks. We name it OM-VCG from the perspective of its target, which is to generate captions; it could also be renamed OM-CMLM.
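To make the distinction concrete, here is a small illustrative sketch (our own interpretation of the description above, not the paper's implementation) of the attention masks the two objectives imply — GPT-style LM lets position i attend to positions up to and including i when predicting i+1, while CMLM masks position i and predicts it from strictly earlier positions:

```python
import numpy as np

def causal_mask(L):
    # GPT-style LM: position i may attend to positions <= i.
    return np.tril(np.ones((L, L), dtype=bool))

def cmlm_mask(L):
    # Causal masked LM: the masked position i is predicted from positions < i
    # only, i.e. it may not attend to itself (its token is replaced by [MASK]).
    return np.tril(np.ones((L, L), dtype=bool), k=-1)

L = 5
print(causal_mask(L).sum())  # 15 allowed attention pairs
print(cmlm_mask(L).sum())    # 10: self-attention at the masked slot removed
```

In practice the first position would be conditioned on a BOS or the cross-modal context rather than on nothing, as in the empty first row here.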
**weakness 3**
(1) Due to space limitations, we put more detailed comparison results, including the dataset-scale comparison among different SoTA methods, in the Appendix.
+ From the results in Table 4 and Table 5 (Appendix), we can find that VAST (442M data) outperforms CLIP-VIP(500M), Florence(900M) on text-to-video retrieval benchmarks with huge margins.
+ From the results in Table 6 (Appendix), we can find that VAST (442M) outperforms VideoCoCa(4.8B), GIT2(12.9B), Flamingo(2.3B) on video QA benchmarks with huge margins.
+ From the results in Table 7 (Appendix), we can find that VAST (442M) outperforms GIT2(12.9B), GIT(1.7B), MaMMUT(2B) on video caption benchmarks with huge margins.
(2) To obtain the best performance and to demonstrate that our method works with the most advanced vision models, we use a strong vision backbone (EVA-CLIP-Giant) in the main paper, which is not consistent with other methods. To make a fair comparison, we additionally trained a VAST-B model that uses CLIP-ViT-B/16 as the backbone and compare it to methods using the same backbone. The results in the following table demonstrate that VAST can outperform both vision-only and multi-modal video-language pretraining models on three representative benchmarks.
| Model | Vision Backbone | MSRVTT-Ret(R@1/R@5/R@10) | MSRVTT-Cap(CIDEr) | MSRVTT-QA(Acc) |
|---|---|---|---|---|
| GIT2-B | CLIP-VIT-B/16 | - | 57.8 | 41.0 |
| mPLUG2-B | CLIP-VIT-B/16| 48.3/75.0/83.2 | 72.4 | 46.3 |
| UMT-B | CLIP-VIT-B/16| 51.0/76.5/84.2 | - | 44.9 |
| VALOR-B | CLIP-VIT-B/16| 43.0/72.2/82.1 | 66.6 | 46.7 |
| **VAST-B (Ours)** | CLIP-VIT-B/16| **56.9/79.8/87.1** | **74.2** | **48.2** |
(3) In Tables 4, 5, 6, and 7 of the Appendix, the groups of methods utilizing multiple modalities beyond vision and text (audio, subtitle) are marked with a gray background color, and we mainly compare VAST with those groups of methods for a fair comparison. From the results, we find that VAST outperforms not only the single-modality but also the multi-modality methods.
**weakness 4**
(1) The specific selection standard has been illustrated in the global rebuttal **[Common Question 2]**.
(2) The audio in HD-VILA-100M is not used for audio captioner training, because its subtitles are ASR transcriptions rather than objective captions. Instead, we use the combination of the VALOR-1M and WavCaps datasets, as illustrated in Section 3.1 (line 136). In addition, the audio in VAST-27M (sourced from HD-VILA-100M) is used as input for VAST model pretraining.
(3) As illustrated in Section 5.1 (line 219), during pretraining we sample only one frame for efficiency, and during finetuning we sample different numbers of frames (8~32) for different benchmarks, as shown in Table 3 of the Appendix. With regard to f_v, we do not apply mean pooling but simply concatenate the patch features of all frames together to preserve the most information for multimodal encoding or decoding. The computation cost is affordable thanks to the cross-attention mechanism used in multimodal fusion (linear complexity in the condition features), instead of merged attention (quadratic complexity). We are sorry for not clarifying f_v, and will clarify it in the next version of the paper; thanks for pointing it out.
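The shape bookkeeping for f_v can be sketched as follows (toy frame/patch counts, not VAST's actual configuration) — concatenation keeps all T*P patch tokens, whereas mean pooling over frames would collapse them:

```python
import numpy as np

rng = np.random.default_rng(2)
T, P, d = 8, 196, 16              # frames, patches per frame, feature dim (toy values)
frame_feats = rng.normal(size=(T, P, d))

# Mean pooling over frames would collapse temporal information: (P, d)
pooled = frame_feats.mean(axis=0)

# Concatenating patch features of all frames preserves it: (T*P, d)
f_v = frame_feats.reshape(T * P, d)
print(pooled.shape, f_v.shape)  # (196, 16) (1568, 16)
```

Since f_v only ever appears as keys/values in cross-attention, the cost of the longer sequence grows linearly in T*P rather than quadratically, which is the affordability argument made above.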
**weakness 5**
+ CLIP-ViP proposes to regenerate captions for the HD_VILA_100M dataset with the open-source caption model OFA, and uses both the raw subtitles and the generated captions for video-language pretraining. Compared with CLIP-ViP, VAST has explicit differences and strengths, which can be summarized as follows.
+ Firstly, OFA, the captioner CLIP-ViP used, is an image-text model that has not seen video caption data during training, while our video captioner is trained on a mixture of image-text and video-text corpora and can generate more motion-related video captions. In addition, their captions reflect visual content only, while the captions in VAST-27M contain vision, audio, and subtitle descriptions as well as omni-modality captions, which is apparently more appropriate for multi-modality pretraining, given its importance as discussed in Common Question 1.
+ Secondly, CLIP-ViP uses the generated visual captions and raw subtitles together and models both "V-T" and "V-S" relationships, while VAST models VS-T correlations. This is because we think that in most practical scenarios we would like to feed the subtitle along with the video input to help video content understanding, instead of predicting it, which can be independently addressed by mature ASR algorithms. In addition, CLIP-ViP performs cross-modality tasks (captioning, QA, retrieval, etc.) based only on the vision signal (1 signal), while VAST can utilize vision, audio, and subtitle (3 signals).
**weakness 6**
Considering the limited space in the main paper and the many points that need to be clarified, we put more examples in the Appendix (Page 9, B.3).
**weakness 7**
Thanks for pointing it out; we will change "PT Data" to "Model" in the next version.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks for the authors' comments. I will keep my score.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
Thanks for the recognition of our work, and for the effort and time you have spent serving as a reviewer!
Author | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for the recognition of our work VAST (dataset and foundation model) and for the very insightful reviews. Your reviews and suggestions are very important for improving the VAST model and paper. We will add the necessary detailed descriptions and model comparisons in the next version of the paper. In this rebuttal, we first give an overall reply to the common questions raised by at least two reviewers, and then reply to the remaining points individually.
**[Common Question 1] About VAST model’s technique novelty and contribution**
VAST is the first omni-modality pretraining model that aims at building cross-modality connections between vision, audio, subtitle, and text, and at supporting vision-language, multi-modal video-language, and audio-language tasks. We think VAST's innovations can be summarized as follows:
+ Current multi-modal video-language models are somewhat specialized: they either additionally support the audio modality beyond vision and language (i-Code [1], VALOR [2], etc.), which enables audio-related tasks, or support the subtitle modality (UniVL [3], MELTR [4], etc.), which enhances the understanding of human speech, recipes, or news. In contrast, VAST is the **first** generalist foundation model that supports all of vision (both image and video), audio, subtitle, and text as input.
+ VAST has, **for the first time**, soundly proven two things through extensive large-scale pretraining-finetuning ablation experiments. First, utilizing all modalities (vision, audio, subtitle) yields consistent improvement over a very broad range of different types of modality-oriented downstream tasks (Table 7, line-j vs. line-e). Second, the multi-modality enhancement is more apparent with the help of large-scale pretraining (Table 7, line-j & line-e vs. line-d & line-a), which demonstrates the value and necessity of omni-modality pretraining research.
+ VAST has achieved dominant SoTA performance on 22 cross-modality benchmarks (image-language, video-language, and audio-language benchmarks), which still holds. We think this shows, to some extent, that compared with further improving VLP architectures or training objectives, bringing in more modalities is a much bigger opportunity, but how to exploit it is relatively under-researched. We hope that VAST attracts more attention to the process of "stepping further from conventional vision-language pretraining towards omni-modality pretraining", and we promise that all datasets, code, and model checkpoints will be publicly released to help the community with reproduction and further research.
**[Common Question 2] About data selection rule of VAST-27M from HD_VILA_100M.**
+ Videos in the HD_VILA_100M [5] dataset cover a broad distribution of categories, including music, gaming, sports, film, animals, education, entertainment, etc. After the original processing in [5], each pair in HD-VILA-100M consists of a video clip of about 13.4 seconds on average and a sentence of 32.5 words on average. We chose to use a subset of HD_VILA_100M instead of all of it mainly for three reasons: 1) the time and monetary cost of downloading, storage, and computation; 2) the 100M clips originate from 3.3M long videos, which means that many clips come from the same video and share similar scenes; 3) many of them can no longer be downloaded from YouTube.
+ Specifically, we select the videos from HD_VILA_100M mainly according to the following 4 rules: 1) we filter out clips whose length is too short (less than 5 seconds) or too long (more than 30 seconds); 2) we filter out clips for which one or more of the vision, audio, and subtitle modalities are missing; 3) similar numbers of clips are sampled from each of the 3.3M videos; 4) within each long video, the sampled clips are kept as dispersed as possible from one another to ensure diversity and reduce redundancy. In the end, we obtained 27M video clips with an average duration of 10.0 seconds. The average lengths of the vision, audio, and omni-modality captions in VAST-27M are 12.5, 7.2, and 32.4 words, respectively.
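The four rules can be sketched as a small filtering pipeline (field names, thresholds for dispersion, and the per-video cap `k` are illustrative, not the actual pipeline's):

```python
# Hypothetical clip records; field names are illustrative only.
clips = [
    {"video_id": "a", "start": 0.0,  "end": 12.0, "has_audio": True,  "has_subtitle": True},
    {"video_id": "a", "start": 3.0,  "end": 15.0, "has_audio": True,  "has_subtitle": True},
    {"video_id": "a", "start": 40.0, "end": 52.0, "has_audio": True,  "has_subtitle": True},
    {"video_id": "b", "start": 0.0,  "end": 3.0,  "has_audio": True,  "has_subtitle": True},   # too short
    {"video_id": "b", "start": 5.0,  "end": 50.0, "has_audio": True,  "has_subtitle": True},   # too long
    {"video_id": "c", "start": 0.0,  "end": 10.0, "has_audio": False, "has_subtitle": True},   # missing audio
]

def keep(clip, min_len=5.0, max_len=30.0):
    # Rules 1-2: duration bounds and no missing modality.
    dur = clip["end"] - clip["start"]
    return min_len <= dur <= max_len and clip["has_audio"] and clip["has_subtitle"]

filtered = [c for c in clips if keep(c)]

def sample_disperse(clips_of_video, k):
    # Rules 3-4: cap clips per video, preferring temporally dispersed ones
    # by picking evenly spaced clips in temporal order.
    ordered = sorted(clips_of_video, key=lambda c: c["start"])
    if len(ordered) <= k:
        return ordered
    idx = [round(i * (len(ordered) - 1) / (k - 1)) for i in range(k)]
    return [ordered[i] for i in idx]

per_video = {}
for c in filtered:
    per_video.setdefault(c["video_id"], []).append(c)
selected = [c for v in per_video.values() for c in sample_disperse(v, k=2)]
print(len(filtered), len(selected))  # 3 2
```

Here only video "a" survives filtering, and the dispersion step keeps its first and last clips, dropping the overlapping middle one.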
**[Common Question 3] About potential data bias of VAST-27M.**
+ Because the data collection process of VAST-27M used a large language model (Vicuna-13b), and the training of the video and audio captioners utilized open-source cross-modality corpora such as CC4M, CC12M, LAION, VALOR-1M, and WavCaps, VAST-27M and the VAST model may inherit the biases of those datasets and of Vicuna-13b.
[1] Yang et al. i-code: An integrative and composable multimodal learning framework.
[2] Chen et al. Valor: Vision-audio-language omni-perception pretraining model and dataset.
[3] Luo et al. Univl: A unified video and language pre-training model for multimodal understanding and generation.
[4] Ko et al. Meltr: Meta loss transformer for learning to fine-tune video foundation models.
[5] Xue et al. Advancing high-resolution video-language representation with large-scale video transcriptions. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null
Unleashing the Full Potential of Product Quantization for Large-Scale Image Retrieval | Accept (poster) | Summary: This paper presents a deep product quantization (PQ) method for approximate nearest neighbor (ANN) search. The motivation is to reduce performance degradation when short codes are applied to large-scale datasets with a large number of categories. The proposed method learns discriminative PQ subvectors by CosFace loss with pseudo-ground truth labels obtained by applying PQ to the set of mean vectors of all the classes. Experiments on Glint360K and ImageNet datasets show that the proposed method yields highly competitive or better performance than several existing ANN methods.
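For concreteness, my understanding of the pseudo-label step is that each class-mean subvector is quantized against a per-subspace codebook; a minimal sketch (toy data and codebooks, not the paper's actual configuration) would be:

```python
import numpy as np

def pq_pseudo_labels(class_means, codebooks):
    # class_means: (C, d); codebooks: list of M arrays, each (K, d/M).
    # Returns (C, M) integer labels: nearest centroid per PQ subspace.
    M = len(codebooks)
    subs = np.split(class_means, M, axis=1)
    labels = []
    for sub, cb in zip(subs, codebooks):
        dists = ((sub[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        labels.append(dists.argmin(axis=1))
    return np.stack(labels, axis=1)

# Toy example: 3 classes, d=4, M=2 subspaces, K=2 centroids each.
means = np.array([[0., 0., 1., 1.],
                  [1., 1., 0., 0.],
                  [0., 0., 0., 0.]])
codebooks = [np.array([[0., 0.], [1., 1.]]),
             np.array([[0., 0.], [1., 1.]])]
print(pq_pseudo_labels(means, codebooks))
# [[0 1]
#  [1 0]
#  [0 0]]
```

In the paper these per-subspace labels would then supervise the CosFace loss on each PQ subvector; the codebooks here stand in for the ones PQ learns by k-means on the class means.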
Strengths: Table 1 shows that when using 32-bit codes for the Glint360k dataset with a large number of categories, the performance of existing ANN methods is poor, and the proposed method has a significant advantage.
The proposed method achieves the same or better accuracy as the existing methods even for ImageNet with a small number of categories.
Weaknesses: A. Due to several ambiguities, I could not understand some important details about the experiment and wondered if the evaluation was done in a fair manner. Specific examples are listed below.
A1. Reading Section 5.1, I assume that Glint360K is used only for training and MegaFace and FaceScrub are used for testing. However, I find an explanation in line 260, "We will assess the retrieval performance using top-1, top-5, and top-20 accuracies on the Glint360K datasets," which is contradictory.
A2. Related to the above point, line 221 says that "for each individual, we have M images. We will test each image by incorporating it into the gallery of distractors and employing each of the other M-1 images as a probe." Is there any reason to use such a tricky protocol? If the gallery and probe sets are MegaFace and FaceScrub, respectively, then both should contain images of the same individuals, so I don't see why they need to add images of FaceScrub to MegaFace. It seems to me that there is a concern that this protocol could introduce a dataset bias to some extent into the evaluation.
A3. I assume the "both datasets" in line 230 refers to Celeb-500k and MS1M-Retinaface, which comprise Glint360K, but it is hard to understand because there is no specific reference to them.
A4. Reading line 239, it appears that the same learning conditions are applied to all the methods. However, the appropriate learning conditions are generally different for different methods and have a significant impact on their accuracy, so I was concerned that this comparison is really fair.
A5. Is the feature preprocessing also applied to the baselines to be compared? Even if the performance of the proposed method does not change with or without preprocessing, the performance of the baseline to be compared may not. For fairness, comparisons with normalized features should also be reported.
A6. Sections 5.2 and 5.2.1 each describe learning conditions, and some of them, such as the learning rate, appear to be contradictory. Further clarification would be needed.
B. Some additional evaluations should be conducted to clarify the effectiveness of the proposed method.
B1. Looking at the ImageNet results (Tables 3 and 4) and the Glint360K results (Tables 1 and 2), the trend of the performance gap is very different. What causes the performance gap should be clarified. Specifically, although the proposed method does not always provide significant improvement over the baselines on ImageNet, it shows significant advantages on Glint360K when shorter codes are used, suggesting that the number of classes has a significant impact on the performance gap. If this is true, it should be interesting to analyze the performance gap for various number of classes.
B2. Another simple baseline to be compared would be a straightforward combination of fine-tuned feature extraction backbone + supervised quantization, i.e., first extract a deep feature of an image by using feature extraction backbone fine-tuned with a training data (e.g., Glint360K), and then use supervised quantization (e.g., [Wang et al., Supervised Quantization for Similarity Search, ICCV2016]) to encode the feature into a short code. Comparison to such an approach may clarify the effectiveness of the proposed method further.
C. The proposed method is not very novel, since many deep product quantization methods have been proposed in the past (e.g., [23, 28, 45, 49, 50] or [Jang et al., Generalized Product Quantization Network for Semi-supervised Image Retrieval, CVPR2020]).
D. Some minor problems
D1. How is the <||F_m||, C_mk> in Equation (9) is computed?
D2. The number of categories is large enough compared to previous standards in ANN searches, but the sample size assumed in this paper does not seem very appealing given that public datasets with 1B-scale samples (such as GIST1B) have been used since the early years.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: I hope the authors will resolve all the ambiguities I listed in A1-A6.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: I could not find any discussion of the limitations of the proposed method. It might be a good idea to discuss the performance if there exist distribution shifts among the training, gallery, and probe sets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Retrieval performance.**
We evaluate unseen retrieval performance on the Glint360k dataset: Glint360k is used exclusively for training, while MegaFace and FaceScrub are employed for testing.
**Q2. Evaluation protocol.**
In fact, the protocol we described is widely recognized and implemented across various studies. For more specific details, we would recommend referring to [Kemelmacher-Shlizerman, Ira, et al. "The megaface benchmark: 1 million faces for recognition at scale." CVPR 2016].
**Q3. "Both datasets".**
Thank you very much for pointing out the issue in line 203. Indeed, as you mentioned, "both datasets" refers to the clean Celeb-500k and MS1M-Retinaface datasets. We will make the necessary corrections accordingly.
**Q4. Same learning conditions.**
In fact, for these methods we used the default settings provided by the Cisip project when available, or otherwise the publicly accessible projects released by the original authors. Since these methods do not provide specific learning-rate schedules for the datasets we test, it would be infeasible to run a grid search (or similar) to find the optimal schedule for every compared method. On the other hand, most of these methods are based on fine-tuning, so using the default settings, such as an initial learning rate of 0.0001 decayed to 0.00001, should not have a disruptive impact on the conclusions drawn.
**Q5. Feature preprocessing.**
For other methods, we incorporate their specific feature preprocessing techniques as defined by the respective authors. This includes applying feature normalization, which often results in improved performance, if the methods recommend or require it.
**Q6. Learning rate contradictory.**
Section 5.2.1 provides further details compared to Section 5.2. On the Glint360k dataset, the setup is as follows: as stated in the papers of the deep hashing methods we compare against, they utilize pretrained models to initialize network parameters and perform fine-tuning. As a result, these methods set their initial learning rate to 0.0001. However, our approach conducts training from scratch, utilizing only the PQ label provided by the pre-trained model. Therefore, our initial learning rate is set to 0.1.
**Q7. Performance comparison on various number of classes.**
Thank you. This is indeed an interesting issue. To assess performance variations across different numbers of classes, we conducted the following experiment: we selected training data from Glint360k covering the first 3k, 5k, 10k, 72,046 (20% of all classes), 180,116 (50%), and 360,232 (100%) classes. It is crucial to highlight that in the previous setting we conducted unseen retrieval on Glint360k using the MegaFace and FaceScrub datasets. Recognizing that the reduced diversity of training data may impact unseen-retrieval performance, we instead tested on the first 200,000 images of the Glint360k dataset to evaluate the learning performance of the various methods. From each class, we extracted one image to construct the query set, totaling 2862 images, while the remaining data served as the gallery set. We provide the Top-1 and Top-20 performance results in Table 1 (provided in the PDF).
We can observe that as the number of classes increases, OrthoHash's performance gradually degrades. GreedyHash achieves its optimal performance when the number of classes is small. Our method consistently achieves high performance. For greater clarity, we also provide a trend visualization in Figure 3 (provided in the PDF).
**Q8. Another baseline.**
Thank you for your suggestion. Similar to PQ, SQ is a non-deep hashing method. It aims to learn a transformation matrix that maps features into a discriminative subspace and uses CQ ([50]) for retrieval. However, as the source code for SQ is not available, we cannot guarantee an accurate reproduction within the limited timeframe. We will try to include this baseline in the future.
**Q9. The difference with other deep product quantization methods.**
As PQ is one of the most popular approaches for quantization-based retrieval, there are indeed many techniques aiming to improve its retrieval performance. However, our method stands out by extending the applicability of deep hashing methods to large-scale datasets, which sets it apart from the others. Moreover, in terms of methodology, our approach differs significantly. DPQ ([23]) achieves self-learning of PQ codes by classifying the soft-quantized features and employing two regularization terms. GPQ [Jang et al., Generalized Product Quantization Network for Semi-supervised Image Retrieval, CVPR2020] is similar to DPQ in that it classifies soft-quantized features and utilizes a metric-learning strategy within subvectors. Similarly, PQN ([45]) constructs soft-quantized features and learns with a triplet loss. PQM ([28]) uses quantized features for classification and optimizes quantization error through iterative updates. In comparison, our approach focuses on subspaces. As for OPQN ([49]), its codebook is predefined and limited by the feature dimensionality. CQ ([50]) is a non-deep feature quantization method.
**Q10. How is the <||F_m||, C_mk> in Equation (9) is computed.**
F_m represents the normalized m-th sub-feature. C_mk refers to the k-th item of the m-th sub-codebook, where ||C_mk|| = 1. <•> can be either the cosine distance or the Euclidean distance. Since every C_mk has a constant norm, F_m is always assigned to the same closest codeword, denoted C_mk*, regardless of whether normalization is applied.
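This invariance is easy to verify numerically (a minimal NumPy sketch of our own, not the paper's code):

```python
import numpy as np

# Minimal sketch of the claim above: when every codeword C_mk has unit
# norm, normalizing the sub-feature F_m does not change which codeword is
# closest, since argmin_k ||F - C_k||^2 = argmax_k <F, C_k> and the inner
# product only scales by a positive factor when F is normalized.
rng = np.random.default_rng(0)
K, d = 256, 16                                   # codewords per sub-codebook, sub-dimension
C = rng.normal(size=(K, d))
C /= np.linalg.norm(C, axis=1, keepdims=True)    # enforce ||C_mk|| = 1

F = 5.0 * rng.normal(size=d)                     # unnormalized sub-feature
F_bar = F / np.linalg.norm(F)                    # normalized sub-feature

k_raw = int(np.argmin(np.linalg.norm(C - F, axis=1)))
k_bar = int(np.argmin(np.linalg.norm(C - F_bar, axis=1)))
assert k_raw == k_bar                            # same assignment either way
```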
**Q11. About GIST1B**
GIST1B is a fixed feature representation set that cannot be learned. For deep hashing methods, they require adaptive learning to the features. Therefore, deep hashing methods usually do not include benchmark comparisons on GIST1B.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their responses. Several of my concerns have been resolved.
Q1-Q7. Now these points are clear to me. I would expect the authors to clearly explain and emphasize these points in their paper.
Q8. The proposed method (like other existing deep feature quantization methods) can be viewed as a joint optimization of features and codes. Comparisons with separate optimization approaches would be desirable.
Q9. I am still not fully convinced. As the authors mention, GPQ uses metric learning of subvectors, which is similar to the proposed method. The novelty (at least at a high level) is not yet clear to me. Due to this, the source of superiority of the proposed method also remains unclear.
Q10. The notation here is somewhat confusing to me. $||x||$ reminds me of a norm, so $<||x||, y>$ looks like an inner product of a scalar $||x||$ and a vector $y$, which is confusing. If the author wants to mean normalized $x$ by $||x||$, I would recommend the authors consider other alternatives such as $\bar{x}$.
Q11. I was not trying to say that GIST1B should be used for experiments. It just seems to me that it is not large enough to claim that it is large-scale.
---
Reply to Comment 1.1.1:
Title: Response to the comment of hk7b [1/2]
Comment: We appreciate your comments, and we will make further efforts to address your concerns.
**Q1-Q7.** Thank you, and we greatly appreciate your advice. We will make sure to provide clear explanations in our paper.
**Q8.** Thank you for your suggestion. We have made efforts to reproduce SQ[1] as much as possible, and the relevant code will also be released along with our method. In fact, SQ involves a W-step operation during its learning process, which requires performing matrix multiplication with a large-scale matrix (number of samples * number of classes). This makes SQ impractical for datasets of the scale of Glint360k. Therefore, we conducted experiments on the largest feasible dataset, ImageNet1K, using the 2048-dimensional features generated by a pre-trained ResNet50 (as per our original paper's setup). The experimental results are as follows, clearly demonstrating the advantages of our method over SQ.
**ImageNet1K, mAP@1000**

| Method | 32-bit | 64-bit |
|:-----------:|:--------------:|:---------------:|
| SQ | 0.3004 | 0.3599 |
| Ours | **0.6207** | **0.6543** |
**Q9.** Indeed, as you mentioned, our method shares similarities with GPQ[2], such as the fact that both learn a codebook. However, our method differs from GPQ in the following aspects:
1. GPQ utilizes softmax-based soft assignment of codewords to obtain quantized vectors, whereas our method does not involve this process.
2. GPQ does not incorporate a classifier for the complete feature; instead, it utilizes sub-vectors for category differentiation. In contrast, our approach includes a classifier for the complete feature, facilitating secondary retrieval against complete features.
3. The codebook in GPQ is mainly updated through soft assignment by a matrix $W_m$, while in our method, it is updated through direct loss backpropagation.
4. Our method learns a set of PQ labels to ensure the mapping of the same category to the same PQ code. In comparison, GPQ focuses on explicitly optimizing the similarity between quantized features and original features to reduce quantization error, using the N-pair Product Quantization loss.
In practice, GPQ may not be well-suited for learning on large-scale data. Updating the codebook through soft assignment based on the matrix $W_m \in \mathbb{R}^{M \times D \times Class\\_num}$, where $M$ is the number of feature segments and $D$ is the feature dimension, becomes challenging as $Class\\_num$ increases. In our experiments on Glint360k using the GPQ code provided by the authors, we encountered significant difficulties due to the extremely slow learning process of GPQ: training alone would take approximately 492 hours, making it impractical to proceed with the experiments. To address this issue, we made slight modifications to the training process and introduced a variant called GPQ*. In GPQ*, we accelerated training by reducing the frequency of codebook soft assignment based on $W_m$, updating the codebook only every 100 steps instead of every step. We provide a runtime and performance comparison between our method and GPQ under the same experimental conditions (iResnet18, 8 * 3090Ti, 20 epochs, batch size 128 * 8, SGD optimizer, 32-bit). The results are presented in the table below.
|Method| Sample/sec | Training Time | Top1 | Top20 |
|:--------:|:--------------:|:-----------------:|:---------:| :-------:|
| GPQ | ~192 | ~ 492 hour | - | - |
|GPQ* | ~1948 | ~ 48 hour | 0.0618 | 0.2105 |
| Ours | **~6144** | **~ 15 hour** | **0.6945** | **0.8231** |
The experimental results clearly demonstrate the significant advantages of our method over GPQ in terms of both learning efficiency and performance. Additionally, it can be observed that GPQ* seems to have poor performance. One possible reason for this is that the soft assignment of the codebook based on $W_m$ plays a crucial role in GPQ. Another reason could be that GPQ, to embed semantic information in codewords, utilizes sub-features for comprehensive classification (e.g., 360,232 categories), resulting in GPQ utilizing only a fraction of the feature length, namely $D/M$. It is well-known that the dimensionality of features directly affects their expressiveness. While this setup may not significantly impact the performance on small-scale datasets, as the data scale and number of categories increase, learning becomes more challenging, and the limitations of GPQ become apparent.
**Q10.** Thank you very much for the suggestion. We will incorporate the change and use "$\overline{x}$" to represent the normalized x, instead of $||x||$.
---
Reply to Comment 1.1.2:
Title: Confirmation of our response
Comment: We would like to know if there are any further questions or suggestions. If our responses have addressed your concerns, would you consider raising the score? Once again, we appreciate your suggestions. | Summary: In this paper, a framework for large-scale image retrieval is proposed, which is based on deep hashing and product quantization (PQ). The key contribution of this framework is the alleviation of the performance crash problem that arises when using very short PQ codes to save space and computation. The performance of the proposed method is demonstrated through experiments conducted on multiple large-scale datasets with numerous classes.
Strengths: 1. This paper is well-organized and provides clear introductions to the details.
2. It explains the potential reasons behind the performance decline of short PQ codes and supports its claims through extensive experiments.
3. The proposed method is both novel and potentially practical, alleviating the problems of saving space and time in large-scale image retrieval.
Weaknesses: 1. Several specific experiments, such as "different network structures," "visualization of sub-features and clustering centers," and "compression efficiency," are conducted using the proposed method and traditional PQ alone. However, this comparison may be unfair as it overlooks other outstanding methods.
2. As the key contribution, it is important to provide a thorough comparison of the performance decline or crack between the proposed PQ method and other outstanding PQ-based methods when using shorter PQ codes.
3. Visual retrieval samples of Top-K should be provided for clarity.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Line 116 of Page 3: It is not clear whether "N"/"2^b" refers to "the class number of the dataset"/"256" based on the previous explanation (Line 113-115). Additionally, it should be noted that "N" is used to represent the batch size in Equation 2.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: More results.**
**Different network structures.** Thank you. Due to limited time, we have supplemented results for two outstanding methods, OrthoHash and GreedyHash, which showed relatively good performance on iResnet50 in our paper. Specifically, we evaluated their performance on iResnet18 and iResnet100 as well (the 64-bit OrthoHash experiment on iResnet100 is still in progress, and we will provide the results later). The performance comparison is tabulated below. As can be observed, in comparison to these two excellent methods, our approach likewise demonstrates substantial performance gains. Furthermore, all techniques improve as network capacity increases.
**iR18**

| Method | 32-bit Top-1 | 32-bit Top-5 | 32-bit Top-20 | 64-bit Top-1 | 64-bit Top-5 | 64-bit Top-20 | 128-bit Top-1 | 128-bit Top-5 | 128-bit Top-20 |
|:---|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| PQ | 0.0973 | 0.1940 | 0.3031 | 0.4588 | 0.5958 | 0.6903 | 0.8005 | 0.8659 | 0.9025 |
| OrthoHash | 0.0384 | 0.0528 | 0.0650 | 0.0960 | 0.1206 | 0.1421 | 0.2024 | 0.2426 | 0.2749 |
| GreedyHash | 0.0991 | 0.1921 | 0.3014 | 0.4003 | 0.5507 | 0.6620 | 0.7344 | 0.8290 | 0.8824 |
| Ours | **0.6945** | **0.7576** | **0.8006** | **0.7019** | **0.7890** | **0.8414** | **0.8231** | **0.8824** | **0.9133** |

**iR100**

| Method | 32-bit Top-1 | 32-bit Top-5 | 32-bit Top-20 | 64-bit Top-1 | 64-bit Top-5 | 64-bit Top-20 | 128-bit Top-1 | 128-bit Top-5 | 128-bit Top-20 |
|:---|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| PQ | 0.2074 | 0.3569 | 0.4941 | 0.7380 | 0.8378 | 0.8878 | 0.9465 | 0.9658 | 0.9720 |
| OrthoHash | 0.4904 | 0.5511 | 0.5953 | - | - | - | 0.7804 | 0.8185 | 0.8444 |
| GreedyHash | 0.5902 | 0.7271 | 0.8035 | 0.7475 | 0.8381 | 0.8861 | 0.8529 | 0.9135 | 0.9415 |
| Ours | **0.9305** | **0.9700** | **0.9734** | **0.9578** | **0.9728** | **0.9753** | **0.9679** | **0.9746** | **0.9772** |
**Visualization of sub-features and clustering centers.** Other binary deep hashing methods lack sub-features and sub-clustering centers. Additionally, running PQ-based deep hashing methods on the Glint360k dataset presents challenges, including convergence issues in methods like DPQ ([23]) and computational-complexity concerns in OPQN ([49]), ADSVQ ([51]), DCDH ([48]), and others. Consequently, we solely visualize and compare the results of PQ and our proposed method.
**Compression efficiency.** On the iResnet50 architecture, we provide a comprehensive visual comparison in Figure 2.
**Q2. Comparison of the performance decline.**
Thank you. Table 1 of the original paper illustrates the decline in performance of other deep hashing methods. In particular, even with longer codes like 128 bits, these methods already struggle and exhibit lower retrieval performance on large-scale datasets, and they show a substantial further decline as the code length shortens. For instance, OrthoHash's Top-1 accuracy decreases from 0.6626 at 128 bits to 0.5526 at 64 bits and further drops to 0.3098 at 32 bits. Similarly, GreedyHash's Top-1 accuracy decreases from 0.8259 at 128 bits to 0.6102 at 64 bits and further diminishes to 0.3688 at 32 bits. A clearer visualization can be found in Figure 2 (provided in the PDF).
**Q3. Top-K visualization.**
Thank you for your suggestion. In Figure 1 (provided in the PDF), we present visualizations of the Top-20 retrieval results on the MegaFace and FaceScrub datasets. This encompasses comparisons among several settings: 1) our method, including L2 retrieval and PQ4 retrieval (without feature processing, with normalization, or with segment normalization), and 2) the original L2 retrieval and naive PQ4 retrieval (with or without feature normalization). It can be observed that our method generates similar Top-20 results under different feature preprocessing settings, further validating its insensitivity to feature preprocessing. Additionally, due to the noise introduced by quantization, we observe a low overlap rate between the Top-20 results of PQ4 retrieval and those of L2 retrieval.
---
Rebuttal Comment 1.1:
Title: Thanks for the reply
Comment: Thanks the authors for their reply.
All of the issues have been modified, and I tend to change my original decision to Accept.
---
Reply to Comment 1.1.1:
Title: Thanks for the comment
Comment: Thank you, and we sincerely appreciate your valuable suggestions and positive feedback. | Summary: This paper presents a novel deep hashing framework based on product quantization. It is different from conventional PQ in learning a set of predefined PQ codes of the classes via a softmax-based differentiable PQ branch. The proposed method is validated to be effective on large scale datasets, including Glint360k.
Strengths: This paper tackles the hashing for large scale datasets with millions of categories and hundreds of millions of samples, which may extend the range that hashing methods can be applied for.
The codebook is learned via PQ branch rather than clustering with the predefined class-level PQ labels as supervision.
The proposed method can be used for retrieval as the traditional PQ but with a low-slope decay of retrieval performance with decreasing code lengths.
The proposed method is validated on large scale datasets to be effective.
Weaknesses: While the overall idea and implementation is simple, there are some implementation or adaptation details not clearly clarified. This includes the PQ code duplication removal in Sec. 4.2. There are also some adaptations when applying the method on the ImageNet100 and ImageNet1K datasets, for example using instance-level features and use OrthoHash to generate PQ labels. It is better to clearly specify the scenarios for different settings to avoid misunderstanding.
While the method section describes both symmetric and asymmetric retrieval, only the asymmetric results are reported.
Line 277, 'atrong' -> 'strong'. Line 284, 'we' -> 'We'.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In Fig. 2, the results for the proposed method are only shown for PQ 32 onwards. How about the results with PQ64, PQ128 and PQ256?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No. One limitation may be the setting of class-level PQ labels, which relies on a pre-trained model.
Flag For Ethics Review: ['Ethics review needed: Privacy and Security (e.g., consent, surveillance, data storage concern)']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Further detail.**
Thank you for your suggestion, now we provide further detail, and they will all be added to the supplementary material later.
**PQ code duplication removal.** We provide further explanation here. For clarity, if we need to modify $n$ items of a PQ code to achieve deduplication, we call this "**n-replace**." Initially, we perform product quantization training on class-average features to obtain a PQ code for each class, with each PQ code consisting of $M$ segments. To resolve a recurring PQ code $P$, we first attempt "1-replace": for every sub-vector $F_m$ associated with $P_m$, we compute its distance to the $K$ cluster centers of the m-th sub-codebook, yielding $M \times K$ distances in total. We then proceed sequentially from the nearest substitution, seeking to replace one PQ segment with the index of a nearby cluster center so as to obtain an unused code. If none of the $M \times K$ substitutions works, we progress to "2-replace": we forcibly apply "1-replace" to substitute one PQ segment with the index $k^*$ of the nearest codeword in its sub-codebook, keep this alteration, and then iteratively apply "1-replace" on top of it. This procedure is repeated until a non-recurring PQ code is found. In simple terms, for a repeated PQ code we look for a replacement with as few item changes as possible while keeping the quantization error as small as possible.
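The "1-replace" search described above can be sketched as follows (our reconstruction with hypothetical names, not the released code):

```python
import numpy as np

# Sketch of the "1-replace" step: for a duplicated PQ code, rank all M*K
# single-segment substitutions by quantization error and take the first
# substitution that yields an unused code.
def one_replace(pq_code, sub_feats, codebooks, used):
    """pq_code: (M,) ints; sub_feats: (M, d_sub); codebooks: (M, K, d_sub);
    used: set of tuples of already-assigned PQ codes."""
    M, K, _ = codebooks.shape
    # distance of every sub-feature to every codeword in its sub-codebook
    dists = ((codebooks - sub_feats[:, None, :]) ** 2).sum(-1)        # (M, K)
    # all (m, k) substitutions, ordered globally by distance (nearest first)
    order = np.dstack(np.unravel_index(np.argsort(dists, axis=None), (M, K)))[0]
    for m, k in order:
        if k == pq_code[m]:
            continue                       # same codeword, not a substitution
        cand = pq_code.copy()
        cand[m] = k
        if tuple(cand) not in used:
            return cand                    # deduplicated with one change
    return None                            # would fall back to "2-replace"
```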
**Instance-level features to generate PQ labels.** Here, "instance" refers to a single sample's features, not the class-average features. In this setting, to avoid having too few samples for PQ cluster training, we use the instance-level features to train the PQ algorithm and obtain the PQ codebook. This codebook is then used to encode the class-average features.
**PQ label generation following OrthoHash.** OrthoHash aims to generate hash codes with maximally separated Hamming distances. This is achieved through repeated Bernoulli sampling: if a sampled hash code is too close in Hamming distance to a previously collected one, it is discarded and the sampling is repeated. After generating hash codes with OrthoHash's approach, we divide each code into $M$ segments and transform them into PQ codes.
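The described sampling procedure can be sketched as follows (an illustrative reconstruction; the function name, bit width, and minimum-distance threshold are our assumptions, not the authors' exact implementation):

```python
import numpy as np

# Sample binary codes by Bernoulli(0.5), reject any candidate whose
# Hamming distance to an accepted code is below a threshold, then split
# each code into M segments and read each segment as a decimal PQ label.
def sample_pq_labels(num_classes, bits=32, M=4, min_dist=8, seed=0):
    rng = np.random.default_rng(seed)
    codes = []
    while len(codes) < num_classes:
        cand = rng.integers(0, 2, size=bits)          # Bernoulli sampling
        if all(np.sum(cand != c) >= min_dist for c in codes):
            codes.append(cand)                        # keep well-separated codes
    seg = bits // M                                   # bits per segment
    pq = [[int("".join(map(str, c[i*seg:(i+1)*seg])), 2) for i in range(M)]
          for c in codes]
    return np.array(codes), np.array(pq)

codes, pq = sample_pq_labels(num_classes=10)
print(pq.shape)   # → (10, 4): one M-segment PQ code per class
```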
**Q2: Asymmetric results.**
We conducted experiments on symmetric retrieval, and as shown in the table below, our method still achieves excellent performance. We observed comparable retrieval performance among PQ4, PQ8, and PQ16; increasing the code length does not have a positive impact on symmetric retrieval performance. We attribute this to the heightened noise in symmetric retrieval resulting from the larger number of PQ code segments.
**Glint360k, iResnet100**

| | PQ4 | PQ8 | PQ16 |
|:----:|:----:|:----:|:----:|
| Top-1 | 0.9387 | 0.9343 | 0.9341 |
| Top-5 | 0.9625 | 0.9609 | 0.9492 |
| Top-20 | 0.9647 | 0.9649 | 0.9565 |
We also conducted experiments using different backbones, and the results are as follows. It can be observed that, similar to asymmetric retrieval, as the complexity of the model increases, the performance improves.
| | iResnet100 (PQ4) | iResnet50 (PQ4) | iResnet18 (PQ4) |
|:----:|:----:|:----:|:----:|
| Top-1 | 0.9387 | 0.9263 | 0.6175 |
| Top-5 | 0.9625 | 0.9441 | 0.6524 |
| Top-20 | 0.9647 | 0.9489 | 0.6868 |
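For readers unfamiliar with the two retrieval modes compared above, the distinction can be sketched with textbook PQ distance computations (a generic NumPy illustration, not the paper's code):

```python
import numpy as np

# Generic PQ retrieval sketch contrasting the two modes: asymmetric
# (raw query vs. quantized gallery) and symmetric (both sides quantized).
rng = np.random.default_rng(0)
M, K, d = 4, 16, 32                  # segments, codewords per segment, feature dim
sub = d // M
codebooks = rng.normal(size=(M, K, sub))
gallery_codes = rng.integers(0, K, size=(100, M))    # quantized gallery

def adc(query):
    """Asymmetric: per-segment lookup table of squared distances to the raw query."""
    lut = np.stack([((codebooks[m] - query[m*sub:(m+1)*sub]) ** 2).sum(1)
                    for m in range(M)])              # (M, K)
    return lut[np.arange(M), gallery_codes].sum(1)   # (100,) distances

def sdc(query):
    """Symmetric: quantize the query first, then compare codeword to codeword."""
    q_code = np.array([np.argmin(((codebooks[m] - query[m*sub:(m+1)*sub]) ** 2).sum(1))
                       for m in range(M)])
    return np.stack([((codebooks[m, q_code[m]] - codebooks[m, gallery_codes[:, m]]) ** 2).sum(1)
                     for m in range(M)]).sum(0)      # (100,) distances

q = rng.normal(size=d)
nn_adc, nn_sdc = int(adc(q).argmin()), int(sdc(q).argmin())
# nearest gallery index under each mode; symmetric adds query-side
# quantization noise, so the two rankings may differ
```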
**Q3: Minor sentence error.**
Thanks for pointing it out. We will correct it and review the paper carefully.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: I thank the authors for the detailed response.
The response addresses part of my concerns, especially the implementation detail. However, the question I asked is not addressed. The results for PQ 64/128/256 are still unclear to me. Judging from the Fig.2 in the rebuttal, the proposed method is likely to perform similar to PQ or even worse for PQ 64/128/256. This makes the contribution limited.
---
Reply to Comment 1.1.1:
Title: Thanks for the comment
Comment: Thank you for the reminder, and we apologize for inadvertently overlooking your question while transferring the content from our Word document to the OpenReview webpage.
Given that our integration of the PQ branch somewhat restricts the discriminative learning of features, our method may not exhibit improved performance compared to traditional PQ for PQ64, PQ128, and PQ256. However, it is important to note that Fig. 2 in the rebuttal demonstrates that the performance of traditional PQ64 already closely resembles that of L2 brute-force retrieval. This can be attributed to the already very small quantization error of traditional PQ, allowing only limited potential for improvement. For instance, when employing PQ64, PQ128, or PQ256, the 512-dimensional feature is divided into 64/128/256 segments, respectively, with the use of 256 cluster centers to partition the 8/4/2-dimensional subspace (512/(64/128/256) = 8/4/2). Accordingly, we consider the traditional PQ to be sufficient when the subspace dimension is small. | Summary: This paper discusses a new deep hashing algorithm based on product quantization, which effectively addresses the issues of high computational cost and low accuracy. The algorithm successfully learns predefined PQ codes for different classes, achieving concise, efficient, and distinguishable codes. It has been validated on multiple large-scale datasets, including ImageNet100, ImageNet1K, and Glint360k, and has shown significant improvements.
Strengths: It addresses the issues of applying deep hashing to large-scale data by providing an easy-to-implement method that does not involve large-scale matrix operations.
Weaknesses: Training with the combination of two branches may increase the complexity and training time of the model. Additionally, the performance of the method depends on the selection of predefined PQ class labels, which is not extensively discussed in the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Does the method rely on a specific data distribution or dataset size? Is it applicable to different types of datasets or larger-scale datasets?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: It is not clear whether this framework is applicable to non-visual tasks or non-image datasets as the validation is conducted on image datasets. Furthermore, the performance of this method on larger-scale datasets, such as those exceeding the size of the existing validation sets, is not explicitly explained.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
**Q1:The increase of complexity and training time.**
In fact, the number of parameters in the PQ branch is very small. Let $M$ denote the number of PQ segments, $D$ the dimension of the embedded features, and $K$ the number of cluster centers. The number of parameters in the PQ branch is $M \times (D/M) \times K = D \times K$, and the FLOPs (floating-point operations) are $M \times (D/M + 1) \times K = (D + M) \times K$, comprising $D \times K$ multiplications and $M \times K$ additions. We list the specific computation and parameter counts in the following table, assuming a relatively long code to learn: $M$ is set to 32, corresponding to 32 × 8 = 256-bit PQ codes, and $D$ is set to 512 with 256 cluster centers per PQ segment. As can be seen from the table below, the parameter count and FLOPs of the PQ branch are negligible compared to the backbone; the branch hardly increases the complexity or training time of the model.
| | Backbone FLOPs | PQ-Branch FLOPs | Backbone Params | PQ-Branch Params |
|:-----------|---------------:|----------------:|----------------:|-----------------:|
| iResnet18 | 2634.12M | 0.14M | 24.03M | 0.13M |
| iResnet50 | 6346.83M | 0.14M | 43.59M | 0.13M |
| iResnet100 | 12149.32M | 0.14M | 65.16M | 0.13M |
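As a quick sanity check, the closed-form counts above can be evaluated directly for the stated configuration (a back-of-the-envelope sketch of our own):

```python
# Closed-form PQ-branch cost with the configuration given above:
# M = 32 segments, D = 512 feature dims, K = 256 centers per segment.
M, D, K = 32, 512, 256
params = M * (D // M) * K          # = D * K codebook parameters
flops = M * (D // M + 1) * K       # = (D + M) * K, multiplications + additions
# params = 131072 (~0.13M) and flops = 139264 (~0.14M),
# consistent with the PQ-Branch columns in the table
```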
**Q2: The select of the PQ label.**
Our method does not seem particularly sensitive to the selection of PQ labels. We provide three pieces of evidence to support this claim:
1) In our experiments on ImageNet1K: as ImageNet1K has just 1000 classes, PQ could not be trained effectively due to the small number of categories. Moreover, the low discriminability between classes in the features generated by a pretrained model leads to a significant number of duplicate PQ codes after PQ training and encoding. Therefore, instead of training on class-average features to generate PQ labels, we use OrthoHash's method to generate hash codes, whereby the hash codes are assigned randomly. These hash codes are then segmented and converted into decimal PQ code labels. The experimental results in Table 4 of the original paper demonstrate that our method also achieves good retrieval results.
2) In another experiment, we used PQ4 labels generated by iResnet18 and iResnet100 to guide the training of iResnet50. The results are presented in the table below. It can be observed that the retrieval performance in these settings is relatively consistent with the original setting. Note that using PQ codes generated by iResnet18 as labels resulted in better Top-1 performance than using PQ labels generated by iResnet100 or iResnet50. This observation aligns with the findings in Table 2 of the original paper, suggesting that PQ4 may have reached a performance bottleneck on the Glint360k dataset.
**Glint360k, iResnet50**

| Label | iR18-PQ4 | iR50-PQ4 | iR100-PQ4 |
|:----:|:----:|:----:|:----:|
| Top-1 | 0.9402 | 0.9342 | 0.9346 |
| Top-5 | 0.9611 | 0.9601 | 0.9609 |
| Top-20 | 0.9658 | 0.9652 | 0.9668 |
3) We also conducted an additional experiment to further discuss the choice of PQ labels; it is in fact part of our future work. In this experiment, we aimed to move away from predefined PQ codes and instead used the weights of the model's fully connected layer as class prototypes to iteratively construct PQ codes during training. The results, shown below, demonstrate performance similar to the current method.
**Glint360k iResnet50**

| | PQ4 | PQ8 | PQ16 |
|:------:|:------:|:------:|:------:|
| Top-1 | 0.9369 | 0.9460 | 0.9682 |
| Top-5 | 0.9628 | 0.9702 | 0.9738 |
| Top-20 | 0.9676 | 0.9723 | 0.9760 |
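For concreteness, the hash-code segmentation described in point 1) could be sketched as follows. This is a hypothetical illustration of the idea only; the 64-bit hash length and the 8-bit segment width are our own illustrative assumptions, not the actual settings used in the paper:

```python
import numpy as np

def random_hash_to_pq_labels(num_classes, hash_bits=64, segment_bits=8, seed=0):
    """Assign each class a random binary hash code, then split the code into
    segments and interpret each segment as a decimal PQ code label."""
    rng = np.random.default_rng(seed)
    # OrthoHash-style random assignment: one random binary code per class.
    hash_codes = rng.integers(0, 2, size=(num_classes, hash_bits))
    num_segments = hash_bits // segment_bits
    labels = np.zeros((num_classes, num_segments), dtype=np.int64)
    weights = 2 ** np.arange(segment_bits)[::-1]  # binary -> decimal weights
    for s in range(num_segments):
        segment = hash_codes[:, s * segment_bits:(s + 1) * segment_bits]
        labels[:, s] = segment @ weights
    return labels

pq_labels = random_hash_to_pq_labels(num_classes=1000)
print(pq_labels.shape)  # (1000, 8): one decimal label in [0, 255] per segment
```

Under these assumptions, each class receives a tuple of decimal sub-labels that can serve as classification targets, one per PQ codebook.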
**Q3: Rely on a specific data distribution or dataset size.**
In terms of dataset size, our method has been validated on multiple large-scale datasets, including ImageNet100, ImageNet1K, and Glint360k. These datasets encompass a wide range of category counts, from 100 to 360k, and sample counts, from 10k to 17 million. It is noteworthy that the Glint360k dataset used in our experiments is, to the best of our knowledge, currently the largest dataset verified among all deep hashing methods. In terms of data distribution, the datasets used in our experiments consist of both face datasets and general object recognition datasets. Additional validation on even larger datasets, such as ImageNet21k and webface260k, is left for future work.
---
Rebuttal Comment 1.1:
Title: An answer to the limitations question
Comment: We appreciate your suggestions, and we apologize for overlooking the limitations question while transferring the content from our Word document to the OpenReview webpage. We now answer the question.
**Q4: Whether this framework is applicable to non-visual tasks or non-image datasets.**
Thank you. Similar to some previous deep hashing methods, this work also focuses on the visual domain. However, for the task of approximate nearest neighbor search (ANN), we believe that the primary objective is to find a suitable representation for the data. In the future, we will validate our method on non-visual tasks wherever possible. | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable comments. Reviewers kMZn, WmwN, and AU32 have mentioned that the method proposed in this paper is novel and have affirmed its potential applicability. Reviewer hk7b also acknowledged the contributions of our method.
The PDF file contains Figure 1, Figure 2, Figure 3, and Table 1.
Moving forward, we will address the questions raised by the reviewers point by point.
Pdf: /pdf/6f940e020217079a6db2b3445b5c1790fd2d853d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Quantizable Transformers: Removing Outliers by Helping Attention Heads Do Nothing | Accept (poster) | Summary: This paper has thoroughly analyzed the activation outlier problem during quantizing of the transformer model, and further indicates the limitation of existing works. Based on the comprehensive studies, the author proposes two components, namely clipped softmax and gated attention, for regularizing the activation outlier during training. Sufficient experiment results demonstrate the effectiveness and efficiency of the proposed method.
Strengths: $\cdot$ The proposed components, namely clipped softmax and gated attention are simple but reasonable. Further experiments also support their effectiveness.
$\cdot$ The observation is solid and further analysis is practical. The experiments include different structures of the transformer model in various tasks.
$\cdot$ I appreciate the comprehensive studies conducted by the authors. The experiment result of computing cost is convincing.
$\cdot$ This paper is well-organized and easy to understand. The authors also provide detailed information on related methods, making it easy for readers to follow the progress of the field.
Weaknesses: $\cdot$ It would be interesting to see the quantization performance of the proposed method at lower bitwidths (e.g., 4W4A).
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: $\cdot$ The avg. kurtosis of OPT model with clipped softmax in tab. 2 is strange, what is the possible reason?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: $\cdot$ The author has claimed in L290-296. I encourage the authors to finish the experiment in the larger model.
$\cdot$ I encourage the authors to release the code for reproducing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weakness
See general response to all reviewers.
## Question
At the moment, we do not have an explanation of why this is the case but will further investigate this issue.
## Limitation 1
See general response to all reviewers.
## Limitation 2
We agree that this will greatly improve the reproducibility and accessibility of our work. We intend to release the code with the camera-ready version.
---
Rebuttal Comment 1.1:
Comment: After reading all reviews and replies, I'd like to keep my score.
---
Reply to Comment 1.1.1:
Comment: Thanks! | Summary: The paper proposes two architectural modifications that mitigate the activation outlier problem that makes transformers challenging to quantize. Particularly, the authors conduct a comprehensive analysis of the underlying causes of outlier issues in various pre-trained Transformer models across different tasks and datasets. They identify specific architectural limitations that lead to large outliers during the pre-training phase. To tackle this challenge, the authors propose two solutions: 1/ clipped softmax and 2/ gated attention that can effectively address the outlier problem without sacrificing model performance when compared to vanilla Transformer models across various use cases. Furthermore, the suggested modifications result in smaller outlier values and enhance the quantization performance of the Transformer models, making them more robust to quantization.
Strengths: 1. The paper presents strong motivations and novel approaches to address outlier issues that distinguish it from prior works. Unlike existing methods that focus on circumventing the impact through post-training techniques, this paper proposes a fundamental training-time solution.
2. The paper proposes a simple yet highly effective methodology, which directly addresses the fundamental issues and observations.
Weaknesses: 1. Evaluating the proposed methodology by comparing it solely to the naive application of (vanilla) PTQ on the plain Transformer architecture might be somewhat unfair and not informative enough. Various outlier suppression methods have been proposed [1,2] as post-training solutions, and thus it would be better to include a comparison with these methods. Given that the proposed methodology requires from-scratch training, its practical value will be justified if it outperforms post-training methods that require no additional training.
2. The paper introduces two distinct solutions; however, they are two orthogonal/competing methodologies that tackle the same problem. The paper lacks clarity regarding the specific circumstances under which each solution should be employed. That said, how can we decide which scheme to use when training a new model on a new dataset without having to try both and choose the better one?
3. As also stated in the paper, the evaluations have only been conducted on smaller-scale models (100M parameters). While the proposed architectural and train-time modifications show promising results that match/outperform the vanilla Transformer performance, it is unclear if this trend can persist at a larger scale (hundreds of millions to billions of parameters). It would be more informative if the authors could provide results across different model scale regimes. Moreover, since the proposed solution is a pre-training methodology, it requires training from scratch and carries additional risk/cost related to extra hyperparameter tuning, which can potentially limit its practical application.
[1] Wei, Xiuying, et al. "Outlier suppression: Pushing the limit of low-bit transformer language models." Advances in Neural Information Processing Systems 35 (2022): 17402-17414.
[2] Xiao, Guangxuan, et al. "Smoothquant: Accurate and efficient post-training quantization for large language models." arXiv preprint arXiv:2211.10438 (2022).
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: N/A
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Please see the weakness section
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weakness 1
Indeed, in most real-world applications the vanilla PTQ might not be good enough.
At the same time, our proposed methodology is complementary and can be combined with most of the more advanced PTQ and weight compression techniques ([1], [2], as well as [3-6] and many more).
Our main motivation of choosing this PTQ baseline was to show how effective our methods are at recovering the quantized network performance even with such a naive and minimal effort PTQ pipeline.
Please also note that, as stated in the main text, most of these works keep certain parts of the network (often including the problematic residual connections after FFN, LayerNorm, etc.) in FP16. On the contrary, we quantize all weights and activations, including the problematic input, output, and residual connections in the FFN.
References:
* [3] Frantar, E., Ashkboos, S., Hoefler, T., & Alistarh, D. (2022). Gptq: Accurate post-training quantization for generative pre-trained transformers. arXiv preprint arXiv:2210.17323.
* [4] Lin, J., Tang, J., Tang, H., Yang, S., Dang, X., & Han, S. (2023). AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration. arXiv preprint arXiv:2306.00978.
* [5] Nagel, M., Amjad, R. A., Van Baalen, M., Louizos, C., & Blankevoort, T. (2020, November). Up or down? adaptive rounding for post-training quantization. In International Conference on Machine Learning (pp. 7197-7206). PMLR.
* [6] Li, Y., Gong, R., Tan, X., Yang, Y., Hu, P., Zhang, Q., ... & Gu, S. (2021). Brecq: Pushing the limit of post-training quantization by block reconstruction. arXiv preprint arXiv:2102.05426.
## Weakness 2
This is a good point!
* Given the fact that clipped softmax did not work for OPT, for the moment we would recommend starting with the gated attention approach (which has worked consistently well on all models tested so far), while also exploring clipped softmax for new models.
* On top of that, we expect to further bring more clarity to this question by exploring bigger-scale models.
## Weakness 3
We agree on risks regarding the hyper parameter tuning and with all the other points.
Please, see general response to all reviewers.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors for their thoughtful response. I will keep my rating and stand for acceptance.
---
Reply to Comment 1.1.1:
Comment: Thanks! | Summary: In this paper, the authors show that strong outliers are related to very specific behavior of attention heads that try to learn a “no-op” or just a partial update of the residual. To reduce outliers, the authors propose two simple (independent) modifications to the attention mechanism. This enables them to quantize transformers to full INT8 quantization of the activations without any additional effort.
Strengths: 1. The authors provide a detailed analysis of the outliers in transformers, which well clarifies their research motivation and provides insights.
2. The analysis of "strong outliers are related to attention heads try to learn no-op" is interesting.
Weaknesses: 1. The authors do not give detailed data distribution after training with the proposed method. I would expect a reduction in the magnitude and number of outliers than in the performance of INT8 quantization that validates the main target of the proposed technique.
2. The authors mention that the evaluation is limited to small models only (125M, which is too small), but quantizing small models with less than 2.7B parameters to INT8 is less challenging compared with large models [1]. A remaining concern is the generalization ability of the proposed method to large-scale models. Analysis or experiments that demonstrate the effectiveness of the proposed technique on larger-scale models are recommended.
[1] LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale. NIPS'22
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Could the proposed method be applied to lower bitwidths beyond INT8? (e.g., 4/6 bit)
2. Does the proposed method require a full retraining of the FP model? When applied to larger models, the training cost could be excessive.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors do not include the limitations and potential negative societal impact in their paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weakness 1
Please note that in all result tables, next to the floating-point and quantized perplexity/accuracy, we also report two metrics that quantify the magnitude and the frequency of the outliers:
1) Maximum infinity norm - measures the magnitude of the outliers (by definition).
2) Kurtosis - the (empirical) fourth standardized moment - measures the tailedness of a distribution, which is directly related to the frequency of the outliers in the distribution (the heavier the tails of the distribution - the higher the kurtosis and vice versa).
As we can see from Table 2, in almost all cases, both of our proposed techniques significantly reduce both the maximum infinity norm (the magnitude) and the kurtosis (the frequency) of the outliers.
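As a concrete illustration of these two statistics, here is a minimal sketch in plain NumPy, where `x` stands for any flattened activation tensor:

```python
import numpy as np

def outlier_metrics(x):
    """Maximum infinity norm (outlier magnitude) and empirical kurtosis,
    the fourth standardized moment (outlier frequency / tailedness)."""
    x = np.asarray(x, dtype=np.float64).ravel()
    inf_norm = np.abs(x).max()
    mu, sigma = x.mean(), x.std()
    kurtosis = np.mean(((x - mu) / sigma) ** 4)
    return inf_norm, kurtosis

rng = np.random.default_rng(0)
gaussian = rng.normal(size=100_000)                 # kurtosis close to 3
heavy = np.concatenate([gaussian, [50.0, -60.0]])   # inject two outliers
print(outlier_metrics(gaussian)[1])  # near the Gaussian value of 3
print(outlier_metrics(heavy)[1])     # far above 3: heavier tails
```

Even two outliers among 100k samples push the kurtosis up by orders of magnitude, which is why it is a useful proxy for outlier frequency.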
## Weakness 2
See general response to all reviewers.
## Question 1
See general response to all reviewers.
## Question 2
* Indeed, the idea is that the model is pre-trained from scratch using one of our proposed techniques instead of the vanilla softmax, which results in a much more quantization-friendly model. We hope that gated attention and/or clipped softmax will be picked up for future LLM generations such that models are readily quantizable.
* However, we agree that this could lead to excessive training costs when applied to very large models.
* To address this, we explored whether fine-tuning using Gated attention can still lead to improved performance and decreased outliers for larger models.
* Setup:
- we used OPT-1.3B pre-trained checkpoint from HuggingFace and fine-tuned it on Bookcorpus + Wikipedia for 4000 steps with batch size 256, maximum sequence length 512, LR = 1e-5, and linear LR schedule with 400 warmup steps (we use the same LR for both model parameters and gating module parameters) and the rest of hyper-parameters are the same as for our pre-training setup.
- we adapted our gating approach as follows: we used b_init = 0 which results in the expected initial gating probability output of pi_init = 0.5. We multiply the gating probability by 2.0 so that at initialization this value is 1 and approximately resembles the model with vanilla softmax at the start of fine-tuning.
- we add a small activation regularization term (at the output of each FFN) to further encourage the reduction in the magnitude of activations (as unlike when training from scratch outliers are already present in the pretrained model and need to be suppressed).
* We show results in Table 2 in the attached PDF. As we can see, fine-tuning with our proposed Gated attention results in a better perplexity and also reduced maximum infinity norm and the average kurtosis compared to fine-tuning with vanilla softmax.
## Limitations
Please note that in the discussion (Section 6), we mention some limitations and a few points regarding the potential societal impact. In case we missed any limitation or potential negative societal impact, we are happy to update the section based on the reviewer’s suggestion(s).
---
Rebuttal Comment 1.1:
Title: Thanks for the authors' response
Comment: Thanks for the authors' response. I keep my rating that recommends accept.
---
Reply to Comment 1.1.1:
Comment: Thanks! | Summary: This paper analyzes the activation outlier problem. It is shown that the outliers are related to the behavior of transformer networks trying to learn not to update residuals (no-op). To achieve the exact zeros needed in the attention matrix for a no-update, the input to the softmax is pushed to be larger and larger. To solve this problem, the authors propose the clipped softmax method and the gated attention method, which can produce exact zeros in the softmax output, so that the outliers are removed.
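To make the mechanisms concrete, here is a rough sketch of the two modifications based on the description above, not the authors' exact formulation; the stretch parameters `gamma`/`zeta` and the sigmoid gating form are assumptions for illustration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def clipped_softmax(x, gamma=-0.03, zeta=1.0, axis=-1):
    """Stretch the softmax output to [gamma, zeta], then clip back to [0, 1]:
    exact zeros become reachable with finite-magnitude inputs."""
    return np.clip((zeta - gamma) * softmax(x, axis=axis) + gamma, 0.0, 1.0)

def gated_attention_output(attn_out, gate_logits):
    """Gated attention idea: a learned sigmoid gate scales the head output,
    so a head can make a (near-)zero update without extreme softmax inputs."""
    return 1.0 / (1.0 + np.exp(-gate_logits)) * attn_out

logits = np.array([4.0, 0.0, 0.0, 0.0])
print(softmax(logits))          # no exact zeros, however small the entries
print(clipped_softmax(logits))  # small probabilities clip to exactly 0
```

With plain softmax, a head must push its inputs towards infinity to approach a zero attention weight; in the clipped version, moderately small probabilities already clip to exactly zero, which is the no-op behavior the paper targets.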
Strengths: The paper explains why outliers exist in transformers. The analysis and visualization are convincing.
Two independent methods are proposed (clipped softmax and gated attention) to solve the outlier problem of transformers, so that models can be quantized easily.
Experiments on language models (BERT, OPT) and vision transformers are provided.
Weaknesses: No results on large scale models and datasets are provided.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How to explain the clipped softmax method no better than vanilla network on OPT?
Can you provide the visualization (Figure 2) after using the proposed methods?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weakness 1
See general response to all reviewers.
## Question 1
At the moment, we do not have an explanation of why this is the case but will further investigate this issue.
## Question 2
Thanks for the suggestion. We investigated this and included the visualization, please see Figure 1 in the attached PDF.
As we can see, both methods can represent a partial/soft no-op behavior, but in case of our methods this does not require strong outliers elsewhere in the network:
* In the case of clipped softmax, the attention probabilities are generally more diffused and smaller in magnitude (which comes from the clipping).
* In the case of gated attention, the output of softmax is also quite different since the update of the hidden representation is now further modulated by the gating probabilities.
Note that we found similar patterns in multiple attention heads, but the exact head indices where we observed such patterns depend on random initialization.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks for the reply. Figure 1 in the attached PDF uses a different head and sequence for clipped softmax and vanilla softmax. I suggest using the same setting for visualization; even if the visualization results are not as expected, it will help in understanding the method.
---
Reply to Comment 1.1.1:
Comment: Thanks for the comment.
Initially, we looked at and analyzed the same head and same sequence but decided to showcase a different head and a sequence with a similar attention pattern of soft no-update/partial update (we would show both were it not for the 1-page limit on the attached PDF).
Such a pattern was not as pronounced as the case presented in Figure 2 in the main text, but this is not surprising (because of different random initialization & training dynamics, it does not have to be the exact same head).
However, we agree that including the same head & sequence is also valuable, and we will include both, and possibly a few more, in the camera-ready version.
Rebuttal: We thank all reviewers for their thoughtful and positive feedback!
We are encouraged that they found our work well-organized and easy to follow (Akrm), with comprehensive experiments (sWjn, Akrm) and solid, insightful analysis (sWjn, Vxgc, Akrm), presenting a simple yet effective methodology that directly addresses the observations and fundamental issues of outliers (NwNc, Akrm).
One common concern (all reviewers) is the question of the scalability of our methods to larger models, including LLMs.
* First, we would like to clarify that our methods are not limited to the smaller-scale models and we expect them to translate to larger models/LLMs. We did not include the results for larger models purely because of compute constraints.
* We acknowledge that this is a very important question and that showing that our methods remain effective at a larger scale will greatly improve the likelihood of the adoption of our methods.
* We intend to include the experiments on bigger models with the camera-ready version (up to 1.5B model size).
Several reviewers (Vxgc, Akrm) have also asked whether the proposed method can be applied to lower bitwidths beyond INT8 (e.g., 4/6 bits).
* Indeed, the proposed methods can be applied to lower bitwidths and can also be combined with other more advanced PTQ techniques.
* Please see Table 1 in the attached PDF for the results of our proposed methods applied to BERT-base and quantized to different bitwidths using our simple PTQ setup. Unless stated otherwise, for low-bit (<8-bit) weights and activations we use MSE range estimation as recommended by [1,2], since it gives better results.
* As we can see, in all cases both of our methods significantly improve the perplexity compared to the vanilla softmax pre-training.
* We also notice that generally the performance progressively degrades as we decrease the bitwidths, which is as expected. Achieving good results with low-bit activation quantization in general is a challenging problem to this day.
* Finally, we notice that the perplexity of the vanilla model significantly improves whenever we consider low-bit weight quantization with MSE ranges compared to the INT8 case. This can be explained by the fact that using MSE range estimation for weights leads to an implicit clipping of activations (in the same and all subsequent layers in the network), which happens to be of the right magnitude so that it does not hurt the perplexity. We found that by going from W8A8 to W6A8 the average kurtosis is reduced from 3406±547 to 631±94 and the maximum infinity norm is reduced from 577±80 to 158±40. However, in all cases the resulting model still has significantly larger outliers and worse performance than with both of our proposed methods.
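The MSE range estimation mentioned above can be illustrated with a simple grid search over symmetric clipping ranges. This is a generic sketch of the idea from [1,2], not the authors' exact pipeline:

```python
import numpy as np

def quantize(x, scale, n_bits=8):
    """Uniform symmetric quantization with clipping to the integer grid."""
    qmax = 2 ** (n_bits - 1) - 1
    return np.clip(np.round(x / scale), -qmax - 1, qmax) * scale

def mse_range_estimation(x, n_bits=8, n_candidates=100):
    """Pick the clipping range minimizing the MSE between the tensor and
    its quantized version, instead of using the raw min/max range."""
    qmax = 2 ** (n_bits - 1) - 1
    max_abs = np.abs(x).max()
    best_scale, best_mse = max_abs / qmax, np.inf
    for frac in np.linspace(0.01, 1.0, n_candidates):
        scale = frac * max_abs / qmax
        mse = np.mean((x - quantize(x, scale, n_bits)) ** 2)
        if mse < best_mse:
            best_scale, best_mse = scale, mse
    return best_scale

# With a heavy outlier, the MSE-optimal range clips well below the maximum,
# implicitly suppressing the outlier -- the clipping effect described above.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(size=200_000), [100.0]])
scale = mse_range_estimation(x)
print(scale * 127)  # effective clipping range, well below the max of 100
```

Min/max range estimation would spend the whole integer grid covering the single outlier; the MSE criterion trades a small clipping error on the outlier for much finer resolution on the bulk of the distribution.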
References:
* [1] Ron Banner, Yury Nahshan, Elad Hoffer, and Daniel Soudry. 2018. Post-training 4-bit quantization of convolution networks for rapid-deployment. arXiv preprint arXiv:1810.05723
* [2] Yoni Choukroun, Eli Kravchik, Fan Yang, and Pavel Kisilev. 2019. Low-bit quantization of neural networks for efficient inference. In ICCV Workshops, pages 3009–3018.
Pdf: /pdf/aeaad706431d4fd7f9e7f0fbf0fe715843d5805e.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
DreamSparse: Escaping from Plato’s Cave with 2D Diffusion Model Given Sparse Views | Accept (poster) | Summary: This paper introduces a framework of 3D reconstruction from sparse using 2D image priors from pretrained diffusion models. It proposes a 3D geometry module which extracts 3D features from 2D images, and then incorporates these features into the diffusion process at novel views, enabling 3D awareness and view consistency. Experimentally, it shows good reconstruction results both quantitatively and qualitatively, with generalization to unseen object categories and (object-centric) scenes with complex backgrounds.
Strengths: - The paper proposes a 3D geometry module which incorporates 3D awareness into the 2D diffusion process. Instead of using NeRF as the 3D representation as in most 2D-to-3D generation works, this paper adopts the intuitions and techniques from image-based rendering or lightfield rendering methods, which I believe, is a very interesting and novel point.
- It shows good experimental results both quantitatively and qualitatively compared to existing SoTA sparse-view methods. Adequate ablation studies and visualizations are provided to show the effectiveness of each module in the framework.
Weaknesses: - Section 4.3.2 naming the task "Scene Level Novel View Synthesis" sounds a bit overclaimed to me. Though I agree that compared to pure object settings with foreground masks this setting with detailed backgrounds is more difficult, many scene-level works deal with rather complex scenes with multiple objects and complex configurations (like indoor environments, DTU dataset, etc.). So I would suggest a change of naming here.
- It seems that input views are not shown except for Figure 4 -- it would be better to also show the input views side-by-side to the novel-view synthesis results, which can give a clearer sense of how much hallucination the model is performing.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - I'm very interested in the style transfer application. However, I've looked at the website and only found static results of individual views. Is this style transfer multiview consistent (or in other words, 3D aware)? I would like to see videos of consistent novel-view renderings, similar to the "Single Image Scene-level Novel View Synthesis Results". If there's no multiview consistency then it would seem to be a straightforward combination of novel-view synthesis and single-image style transfer.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Limitations and potential social impacts are well-discussed in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your time and effort sharing critical feedback regarding our work. We have addressed your points and questions below.
>Section 4.3.2 naming the task "Scene Level Novel View Synthesis" sounds a bit overclaimed to me. Though I agree that compared to pure object settings with foreground masks this setting with detailed backgrounds is more difficult, many scene-level works deal with rather complex scenes with multiple objects and complex configurations (like indoor environments, DTU dataset, etc.). So I would suggest a change of naming here.
Thanks for your feedback. We will rephrase the task as “Object-Centric Scene-Level Novel View Synthesis” in the final version, following your suggestion.
>It seems that input views are not shown except for Figure 4 -- it would be better to also show the input views side-by-side to the novel-view synthesis results, which can give a clearer sense of how much hallucination the model is performing.
We have updated the website to reflect the input images for better visualization of novel view synthesis capabilities from the single view. (Please refer to the link in Abstract and the main paper).
> I'm very interested in the style transfer application. However, I've looked at the website and only found static results of individual views. Is this style transfer multiview consistent (or in other words, 3D aware)? I would like to see videos of consistent novel-view renderings, similar to the "Single Image Scene-level Novel View Synthesis Results". If there's no multiview consistency then it would seem to be a straightforward combination of novel-view synthesis and single-image style transfer.
Unlike prior work which often requires either the replacement of textual control capabilities or the fine-tuning of a pre-trained diffusion model, our paper is the first to demonstrate that it's feasible to utilize a frozen pre-trained diffusion model for novel view synthesis while simultaneously retaining the textual control ability. Moreover, while it's feasible to guarantee editing consistency via test-time per-object distillation like InstructNeRF2NeRF, our study intentionally centers on novel view synthesis devoid of this distillation. Given this constraint, ensuring editing consistency poses significant challenges and falls outside the purview of our current paper. We believe this is an interesting and practical direction, and we would like to introduce more conditions to enhance consistency throughout the editing process without test-time per-object distillation.
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply! My concerns are mostly well-resolved, so I'll keep my positive rating. | Summary: This work proposes an approach for finetuning a pretrained 2D diffusion model for novel view synthesis given a few or only a single input image at test time.
The two key contributions are a 3D geometry module and a spatial guidance module. The geometry module fuses features from multiple input views. The spatial guidance module then integrates the features into the pretrained 2D diffusion model.
Strengths: Leveraging strong 2D priors for novel view synthesis is an interesting approach that is currently a topic of great interest in the community. By building on pretrained text-to-image 2D diffusion models the method naturally allows for textual control.
Overall, the approach is well-explained and the quantitative evaluation indicates a substantial improvement over the considered baselines.
Weaknesses: The comparison to existing works is not clear enough and incomplete:
- It is not clear why there is no comparison to zero-123 (code available), NVS-Fusion (no code available but evaluated on overlapping datasets, CO3D and ShapeNet, so the experimental setting can be reproduced), and 3DIM (no code available but the experimental setting can be reproduced e.g. on SRN cars and chairs)
These methods are also missing from Table 1 and it is not clear why.
- for the related work section on geometry-based nvs methods and sparse 3d reconstruction: how does the proposed approach differ from the existing approaches and where is it similar?
The paper leaves substantial claims unsupported:
- L.14 “enabling it (the approach) to generate geometrically consistent images” - there are no qualitative or quantitative results reported that evaluate the geometric consistency of the results.
- L.78 “The ability to synthesize high-quality and even scene-level images”, for which results are shown on the fire hydrant category of CO3D. While these images contain background, they are not scene-level images but still object-centric, single-category images, and these results do not demonstrate the ability to generate entire scenes.
- L.7 “2D diffusion models, nevertheless, lack 3D awareness, leading to distorted image synthesis and compromising the identity”. This is never shown in the paper but is a central aspect of the story.
The motivation for designing the geometry module is not entirely clear. Why not simply use PixelNeRF and its projected features? It is not clear why a new architecture is needed here and that takes away from the contribution of adding the geometry module. Given that the spatial guidance module appears to be very similar to ControlNet, the technical contribution of this work seems to be rather small.
The qualitative evaluation is very sparse: only a few handpicked results are shown, there are no additional qualitative results in the supplementary, and there are no videos that would give an indication of the consistency of the method.
The writing of the paper should be improved. There are multiple grammatical mistakes and spelling errors, as well as inconsistencies in using terms like “geometry module” and “geometry model” which refer to the same thing.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Why is there no comparison to zero-123, NVS-Fusion (which is officially named GeNVS and should be referred to as GeNVS to avoid confusion), and 3DIM? Is there any reason why the experimental settings of the latter two could not be reproduced, such that the numbers from the original papers can be used for comparison?
Why did you not just use the architecture from PixelNeRF as geometry module? Please consider adding an ablation that demonstrates the improvements made by the proposed geometry module.
Minor comment:
In paragraph 3.2, F is a feature map (dx32x32) right? In this case, I find it confusing to refer to this feature map as 3D features in paragraph 3.2. I suggest rewording this as it is misleading to the reader.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The discussed limitations are not thorough enough. Please discuss limitations with respect to training/inference speed when using a large diffusion model; e.g., I suspect it takes a very long time to generate a video using this model. Further, the model is not guaranteed to be 3D consistent or identity-preserving, and it should be discussed to what extent this is problematic in practice.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and effort in sharing critical feedback regarding our work. We have addressed your points and questions below.
> These methods are also missing from Table 1 and it is not clear why.
Thanks for pointing this out. We will add a discussion of these recent works to Table 1 as suggested.
> for the related work section on geometry-based nvs methods and sparse 3d reconstruction: how does the proposed approach differ from the existing approaches and where is it similar?
Following your suggestion, we will add the following description in the final version:
*While geometry-based methods have advanced novel view synthesis, their limited capacity to model uncertainty in unseen regions and their lack of a 2D prior often restrict their image synthesis quality and their generalization to objects of unseen categories. In contrast, our approach seeks to meld the strengths of geometry-based methods with a robust 2D pre-trained diffusion model. This fusion enables our method to synthesize images that are not only geometry-consistent but also of high quality, ultimately leading to enhanced generalization performance on objects in unseen categories.*
> The motivation for designing the geometry module is not entirely clear. Given that the spatial guidance module appears to be very similar to ControlNet, the technical contribution of this work seems to be rather small.
Thank you for highlighting that aspect. Our geometry module's initial design was inspired by the GPNR model [1], which exploits stacked transformers to aggregate features over multiple context views. GPNR benefits from strong generalization to unseen scenes due to its canonicalized positional encodings of pose information. However, we observed that, due to the lack of an explicit 3D constraint, GPNR occasionally exhibited suboptimal performance on certain intricate objects, such as plants. Consequently, we integrated feature volumes into GPNR to bolster its 3D modeling capabilities. Our supplementary response PDF (Tables 1 and 2) presents an ablation study of our geometry model. The results clearly show that even with a basic GPNR backbone, our method outperforms SparseFusion, though it falls short of our full approach. From this, we deduce two critical insights:
1) **Even when leveraging previous geometry-based methods, our strategy delivers superior outcomes compared to SparseFusion. This underscores the pivotal role the 2D prior plays in novel view synthesis and the importance of our work.**
2) **Our geometry model design can indeed improve the performance of our framework.**
**Regarding the similarity to ControlNet, we would like to clarify that our spatial feature concept is mainly inspired by ControlNet and the Plug-and-Play diffusion model.** By converting the features aggregated from sparse-view images into spatial features, our framework successfully enables a frozen pre-trained diffusion model to perform novel view synthesis. **Specifically, as far as we know, we are the first to use spatial features for novel view synthesis, achieving significantly better results (about 50% better FID and 20% better LPIPS than baselines).** **We believe this will inspire subsequent works that apply pre-trained diffusion models to the 3D area without fine-tuning pre-trained models or test-time distillation.**
> Why is there no comparison to zero-123, NVS-Fusion and 3DIM? Is there any reason why the experimental settings of the latter two could not be reproduced
**In our setting, we perform novel view synthesis on images at 512 resolution, which is a more challenging setting than GeNVS and 3DiM (they synthesize images at 128 resolution), so it would be unfair to compare against the results reported in their papers; we would be glad to compare with them in a fair setting once their code is available.** Following your suggestion, we tried to compare our work with the concurrent Zero 1-to-3; however, we found that Zero 1-to-3 has a strong data assumption: it requires objects to be located at the origin and the cameras to point towards the origin, as a result of its synthetic 3D training data from Objaverse. CO3D, on the other hand, is a dataset of real-life video captures of objects with noisy camera trajectories, where each frame does not point at the center of the object. Thus, converting the camera view parameterization for compatibility with Zero 1-to-3 is not straightforward.
Given that we have benchmarked our approach against the recently published SparseFusion (CVPR 2023) and GBT (arXiv 2023), both with open-sourced code, we believe our contributions have been suitably assessed. We will also discuss GeNVS, 3DiM, and Zero 1-to-3 in our revision.
> The qualitative evaluation is very sparse,
**Thank you. We show more novel view synthesis samples on our website (please refer to the link in the Abstract and the main paper). The results clearly show that our method synthesizes significantly better 360-degree videos than PixelNeRF and SparseFusion on CO3D (without test-time optimization/distillation).** These results are also consistent with our quantitative comparisons (about 50% better in FID and 20% better in LPIPS than SparseFusion). We will add these samples to the Appendix as suggested and genuinely hope that this response sufficiently addresses and alleviates your performance-related concerns.
> I suspect it takes a very long time to generate a video using this model.
Good point. We focus on novel view synthesis without test-time optimization. Our approach takes roughly 2-3 seconds to generate each frame and can generate a 360-degree video in just 3-5 minutes on a single A100-40GB GPU. In contrast, methods based on diffusion models, such as SparseFusion, typically take much longer due to their reliance on test-time optimization. For instance, SparseFusion requires over an hour to produce a similar video.
> writing
We will revise them following your suggestions.
---
Rebuttal Comment 1.1:
Title: Following response for some claims
Comment: >The paper leaves substantial claims unsupported:
L.14 “enabling it (the approach) to generate geometrically consistent images” - there are no qualitative results reported that evaluate the geometric consistency of the results.
Thanks. We show more novel view synthesis samples on our website (please refer to the link in the Abstract and the main paper). The results clearly show that our method synthesizes significantly more consistent 360-degree videos than PixelNeRF and SparseFusion on CO3D (without test-time optimization/distillation). We genuinely hope that this response is sufficient to address and alleviate your performance-related concerns.
>L.7 “2D diffusion models, nevertheless, lack 3D awareness, leading to distorted image synthesis and compromising the identity”. This is never shown in the paper but is a central aspect of the story.
Thank you for your feedback. We presented two samples ($\lambda = 0$) in Figure 6 of the main paper to illustrate that the 2D diffusion model, on its own, struggles to synthesize images that maintain consistent geometry and identity. We will add more explanations following your suggestions.
>L.78 “The ability to synthesize high-quality and even scene-level images” for which results are shown on the fire hydrant category of CO3D. While these images contain background these are not scene-level images but still object centric, single category images and these results do not demonstrate the ability to generate entire scenes.
Thanks. We will rephrase the task as “Object-Centric Scene-Level Novel View Synthesis” in the final version, following your suggestions. Additionally, we are among the first to successfully synthesize the object jointly with the full background, rather than merely the masked object, on the CO3D dataset.
>In paragraph 3.2, F is a feature map (dx32x32) right?
Yes, that is correct. We will clarify the wording following your suggestions.
In summary, we thank the reviewers for the time and effort dedicated to our work, and we will revise the paper following your suggestions. We also believe that our response sufficiently addresses your concerns by providing additional samples showing that our results are significantly better than the baselines in a fair setting and by adding the ablation study on the geometry model, and we thus respectfully hope you can reconsider your final decision. If you have further concerns, please feel free to respond and we would be glad to discuss them with you. | Summary: The paper proposes a method for novel view synthesis using diffusion models. Given a sparse set of views, the method extracts per-pixel features for each view and reshapes these features to have a depth dimension by splitting the feature channels. Each sequence of per-pixel features, along with their depths along the viewing-ray direction and positional encodings, is passed to a transformer to produce a per-ray feature vector. Next, all feature vectors from the multiple views are aggregated using another transformer layer. To ensure the result is 3D consistent, the input RGB pixels are linearly combined based on the aggregation weights and compared with the target image using an MSE loss. The resulting features are fed to a spatial guidance module before being fed to a pre-trained Stable Diffusion model.
The method is compared to state of the art methods and outperforms them on the Co3D dataset.
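As a rough illustration of the aggregation step described in this summary (all sizes, values, and names below are hypothetical, not taken from the paper), the per-ray features from each view can be combined with softmax attention weights, and the same weights can blend the input RGB pixels for the MSE reconstruction loss:

```python
import numpy as np

rng = np.random.default_rng(0)
n_views, feat_dim = 3, 16

ray_feats = rng.standard_normal((n_views, feat_dim))  # per-view feature for one target ray
rgb_pixels = rng.uniform(0.0, 1.0, (n_views, 3))      # corresponding input-view RGB samples
target_rgb = rng.uniform(0.0, 1.0, 3)                 # ground-truth pixel in the target view

logits = rng.standard_normal(n_views)                 # stand-in for transformer attention logits
weights = np.exp(logits) / np.exp(logits).sum()       # aggregation weights over views

agg_feat = weights @ ray_feats                        # aggregated per-ray feature vector
blended_rgb = weights @ rgb_pixels                    # linear combination of input RGB pixels
mse_loss = float(np.mean((blended_rgb - target_rgb) ** 2))
```

Reusing the aggregation weights for the RGB blend means the color loss supervises the attention itself, not only the downstream features.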
Strengths: - A solid approach addressing novel view synthesis from sparse views. Especially notable is the way the frozen pre-trained model is used, allowing gradients to be backpropagated through the network without losing generalizability while training the geometry module as well as the spatial guidance module.
- The method was thoroughly evaluated and outperforms the competitor methods quantitatively. Qualitatively, the method performs very strongly despite not being perfectly 3D consistent.
- It is evident that using a pre-trained model helps with generalization of the method. This is demonstrated by the open-set evaluation as well as the scene editing capabilities of the network.
- The method improves with an increasing number of views, even when 5 views are used, despite being trained with a maximum of 4 views. This provides evidence of the view-aggregation robustness of the method.
Weaknesses: - While the method is encouraged to produce 3D consistent results due to the color estimated reconstruction loss, there is no actual 3D constraint in the architecture that enforces any 3D consistency. This will give the network an ‘easy way out’ when something ‘wrong’ is easier to explain rather than the actual correct 3D consistent solution. This could possibly encourage hallucinations as seen in the results.
- The method divides the per-pixel features from the ResNet backbone by the number of ‘depth samples’ which results in a feature dimension of 28 per depth sample. It does not make intuitive sense to distribute spatial features along the depth dimension. Can the authors explain their thoughts behind this design choice?
- The authors claim that pixelNeRF wins in terms of PSNR due to blurry results. While I agree that PSNR can be misleading and their results look visually a lot better, I disagree that this is due to blurriness. I believe the proposed method hallucinates wrong backgrounds, which can be seen on the website where the backgrounds keep changing. The paper should state more openly that the method produces sharp results but does not always reproduce the exact information seen in the input. It would be interesting to show the full ground-truth video spins compared with the resulting spins given N input views.
- The ‘Open-Set Category Results’ on the website do not show input images.
- There are some related works regarding multi-view reconstruction missing (2, 3, 4). See missing references below. Also, the paper title is very similar to an existing relevant work (1) that was not cited. The paper should include this missing reference and provide appropriate credit.
Missing citations:
- [1] Henzler et al., Escaping Plato's Cave: 3D Shape From Adversarial Rendering. In ICCV 2019
- [2] Anciukevicius et al., RenderDiffusion: Image Diffusion for 3D Reconstruction, Inpainting and Generation. In CVPR 2023
- [3] Henzler et al., Unsupervised Learning of 3D Object Categories from Videos in the Wild. In CVPR 2021
- [4] Sitzmann et al., Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations. In NeurIPS 2019
Typos:
- Figure 2: ‘views.In’
- Line 273: Table 3 → Table 4
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How does the method handle background? It seems non-trivial to aggregate multiple views with backgrounds compared to masked objects. Does that network just deal with it? What does the aggregated rgb input look like in these cases?
- The rgb input and spatial guidance module both seem to be very crucial for performance (figure 6 and 8). Both contributions provide 3D / color guidance but they seem to be quite sensitive. Can the authors provide some insights with respect to failure cases?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Both limitations and potential negative societal impact are addressed appropriately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and effort in sharing critical feedback regarding our work. We have addressed your points and questions below.
>While the method is encouraged to produce 3D consistent results due to the color estimated reconstruction loss, there is no actual 3D constraint in the architecture that enforces any 3D consistency. This will give the network an ‘easy way out’ when something ‘wrong’ is easier to explain rather than the actual correct 3D consistent solution. This could possibly encourage hallucinations as seen in the results.
This is a good point. The feature utilized for color estimation is constructed from volumetric modeling of features, making it 3D-aware and ensuring 3D consistency. Our paper illustrates how this can enhance images synthesized by the diffusion model. Nonetheless, as you've highlighted, there remains a possibility of encountering failure cases. We believe these can be ameliorated by integrating stronger geometry cues like depth or point cloud estimation models.
> The method divides the per-pixel features from the ResNet backbone by the number of ‘depth samples’ which results in a feature dimension of 28 per depth sample. It does not make intuitive sense to distribute spatial features along the depth dimension. Can the authors explain their thoughts behind this design choice?
The core reasoning behind reshaping the pixel-aligned feature map from the ResNet into a feature volume and trilinearly sampling features is to infuse a stronger 3D inductive bias into the feature representations of the objects/scenes. Bilinearly sampling pixel features from a 2D feature map can suffer from ambiguities in 2D images, such as resolving occlusions. Learning to encode volumetric features instead enforces the representations to be more 3D-aware, leading to better reconstruction capabilities.
> The authors claim that pixelNeRF wins in terms of PSNR due to blurry results. While I agree that PSNR can be misleading and their results look visually a lot better I disagree that this is due to blurriness. I believe the proposed method hallucinates wrong backgrounds which can be seen on the website where the backgrounds keep changing. It should be addressed more openly in the paper that the method produces sharp results, however, it does not always produce the exact information as seen from the input. It would be interesting to show the full ground truth video spins compared with the resulting spin given N input views.
Thank you for your discussion. We agree that background hallucination is another factor impacting the PSNR score, but we would like to respectfully clarify that blurriness is the main factor. To verify this, we further report the results of color estimation from the geometry model (which is typically blurrier than the diffusion model's output) in Tables 1-3 of our Appendix. The results clearly demonstrate that the output from our geometry model alone achieves better PSNR than all baselines.
Further, we show the synthesized videos with more input views on our website following your suggestions. The results clearly show that more input views can alleviate the background hallucination problem as expected.
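To illustrate the PSNR discussion above, here is a toy numerical example (all values are illustrative, not from the paper) showing why a uniformly blurred estimate can score higher PSNR than a sharp estimate with a hallucinated region:

```python
import numpy as np

rng = np.random.default_rng(0)
gt = rng.uniform(0.0, 1.0, (64, 64))        # stand-in ground-truth image

# "Blurry" estimate: every pixel shrunk halfway toward the global mean.
pred_blurry = 0.5 * gt + 0.5 * gt.mean()

# "Sharp" estimate with a hallucinated region: ~30% of pixels replaced by unrelated values.
pred_sharp = gt.copy()
mask = rng.uniform(size=gt.shape) < 0.3
pred_sharp[mask] = rng.uniform(0.0, 1.0, mask.sum())

def psnr(a, b, peak=1.0):
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

psnr_blurry = psnr(gt, pred_blurry)  # uniformly softened errors score higher
psnr_sharp = psnr(gt, pred_sharp)    # lower, despite being perfect outside the hallucinated region
```

PSNR penalizes the squared error of hallucinated pixels heavily, while moderate blur everywhere keeps per-pixel errors small; this is consistent with both blur and hallucination pulling PSNR in the directions discussed above.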
> The ‘Open-Set Category Results’ on the website do not show input images
We have updated the website to reflect the input images for better visualization of novel view synthesis capabilities from the single view.
> There are some related works regarding multi-view reconstruction missing
Thank you for pointing out the missing citations and typos in the writing. We will make sure to revise the draft following your suggestions.
> How does the method handle background? It seems non-trivial to aggregate multiple views with backgrounds compared to masked objects. Does that network just deal with it? What does the aggregated rgb input look like in these cases?
Sorry for the confusing description. We do not design a specific algorithm to model the background; we simply discard the foreground object mask during preprocessing to preserve the entirety of the scene, including the background. Afterwards, the image goes through the same pipeline as in the object-only scenario. The results demonstrate that the frozen pre-trained model can synthesize novel views of objects jointly with the full background scene, with better quality than models trained from scratch.
> The rgb input and spatial guidance module both seem to be very crucial for performance (figure 6 and 8). Both contributions provide 3D / color guidance but they seem to be quite sensitive. Can the authors provide some insights with respect to failure cases?
Initializing the input noise for the diffusion model with the RGB color output perturbed by noise, instead of with random Gaussian noise, helps the generated sample preserve components of the RGB image such as its structure and color. As the RGB output is a coarse and blurry estimate from the geometry prior, perturbing it with only a few timesteps leads to generated images that are not fully refined and sharp. We find that increasing the number of perturbation timesteps effectively allows the diffusion model to make more edits during the reverse diffusion process and is crucial for high-quality image synthesis.
Spatial guidance similarly facilitates the preservation of the spatial structure and semantics of the generated output. With a lower spatial guidance weight, the denoising process has to rely solely on the noise-perturbed input RGB. Large spatial guidance weights, however, decrease the generated image quality by undermining the base diffusion model's feature maps and thereby suppressing image diversity (similar to having high weights for classifier-free guidance). Through empirical trials, we found that a noise perturbation of 20 timesteps and a spatial guidance weight of 2.0 generate the best quality images with the highest fidelity.
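The initialization strategy described above can be sketched as a standard DDPM-style forward perturbation applied to the coarse RGB estimate. The linear schedule below is a generic assumption for illustration; only the "start at roughly 20 timesteps" choice comes from the discussion above:

```python
import numpy as np

# Illustrative linear DDPM schedule (not taken from the paper).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)   # cumulative product of (1 - beta_t)

def perturb(x0, t, rng):
    """DDPM forward process: noise the coarse RGB estimate x0 up to timestep t."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = rng.uniform(-1.0, 1.0, size=(3, 64, 64))  # stand-in coarse RGB from the geometry module
x_t = perturb(x0, t=20, rng=rng)               # start reverse diffusion here, not from pure noise
```

Starting the reverse process at a small t keeps most of the coarse estimate's structure; starting at a larger t lets the diffusion model edit more aggressively, which is the trade-off described in the response.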
---
Rebuttal Comment 1.1:
Title: Disagreement on claims.
Comment: 1. "ensuring 3D consistency" --> I do not agree with this and am fairly convinced that the method does not ensure 3D consistency but 'encourages 3D consistency'. The transformer network is not constrained to learn 3D consistent outputs.
2. "infuse stronger 3D inductive bias" --> I do not see how reshaping 2D features would infuse any 3D inductive bias. Would be great to see an ablation for this.
---
Reply to Comment 1.1.1:
Title: Following Response to 'Disagreement on claims.'
Comment: Thank you for getting back to us and raising helpful and critical discussions! We are glad to have addressed some of your concerns in our response. We further provide below, the responses to your additional points.
> "ensuring 3D consistency" --> I do not agree with this and am fairly convinced that the method does not ensure 3D consistency but 'encourages 3D consistency'. The transformer network is not constrained to learn 3D consistent outputs.
Thank you for raising this concern. Following your suggestion, we would like to correct our claim to “encouraging 3D consistency.” We agree that while color reconstruction loss encourages 3D consistency through enforcing color consistency across 3D viewpoints, it does not ensure 3D consistency as no actual 3D constraints are in place.
Furthermore, to verify the learning of geometry, we attempted to visualize depth map estimates of the synthesized novel views by computing an expectation over the depths of the points along each ray. Specifically, the expectation probabilities were directly obtained from the normalized soft-attention weights of our pre-trained transformer networks.
We further updated the results on our webpage, and the resulting depth estimates indicate that the network does indeed learn proper geometry with respect to 3D rather than finding an easy way out. Nevertheless, we agree that “encouraging 3D consistency” is a more appropriate claim.
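The depth-from-attention computation described above amounts to an expectation over the depth samples along each ray; a minimal sketch (with made-up attention logits and depth values) is:

```python
import numpy as np

# Hypothetical: 8 depth samples along one ray and illustrative attention logits peaked mid-ray.
depths = np.linspace(1.0, 8.0, 8)                            # sample depths, near to far
logits = np.array([0.1, 0.3, 2.0, 4.0, 2.0, 0.3, 0.1, 0.0])  # stand-in transformer logits
weights = np.exp(logits) / np.exp(logits).sum()              # normalized soft-attention weights

expected_depth = float((weights * depths).sum())             # expectation over depths on the ray
```

Repeating this per pixel of the target view yields the kind of depth map visualization the response refers to.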
> "infuse stronger 3D inductive bias" --> I do not see how reshaping 2D features would infuse any 3D inductive bias. Would be great to see an ablation for this.
Sorry for the confusion arising from unclear explanations.
The original feature map from the ResNet has a resolution of c×h×w, where c is the number of channels, and h and w represent the height and width, respectively. We transform this feature map into a volume feature with a resolution of c′×d×h×w, where h and w remain unchanged, d represents the depth, and c=c′×d.
Given a 3D point, we project it onto the volume feature where near and far planes are mapped to the depth dimension of the volume. We then tri-linearly interpolate features from this volume feature and pass them through two additional transformers: one is employed to calculate weighted epipolar features, and the other is designed to aggregate information across views. Our goal with these steps is to derive spatial features.
Given that both the additional transformers and the ResNet are trainable components of our architecture, certain channels (specifically, those distributed along the new depth dimension, where d = c / c′) of the feature map generated by the ResNet are encouraged to learn depth/occupancy cues when trained on a multi-view dataset.
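A minimal numpy sketch of the reshaping and trilinear sampling described above (all sizes are hypothetical; c = 224, c′ = 28, d = 8 are chosen to match the "28 channels per depth sample" reading in the review):

```python
import numpy as np

# Hypothetical sizes: c = 224 ResNet channels split into c' = 28 channels x d = 8 depth bins.
c, h, w = 224, 32, 32
d = 8
c_prime = c // d  # 28

rng = np.random.default_rng(0)
feat_2d = rng.standard_normal((c, h, w))

# Reshape the 2D feature map (c, h, w) into a feature volume (c', d, h, w).
feat_vol = feat_2d.reshape(c_prime, d, h, w)

def trilinear_sample(vol, z, y, x):
    """Trilinearly interpolate the volume at fractional (depth, y, x) coordinates."""
    z0, y0, x0 = int(z), int(y), int(x)
    z1 = min(z0 + 1, vol.shape[1] - 1)
    y1 = min(y0 + 1, vol.shape[2] - 1)
    x1 = min(x0 + 1, vol.shape[3] - 1)
    dz, dy, dx = z - z0, y - y0, x - x0
    out = np.zeros(vol.shape[0])
    for zi, wz in ((z0, 1 - dz), (z1, dz)):      # blend the 8 surrounding voxels
        for yi, wy in ((y0, 1 - dy), (y1, dy)):
            for xi, wx in ((x0, 1 - dx), (x1, dx)):
                out += wz * wy * wx * vol[:, zi, yi, xi]
    return out

# A projected 3D point yields fractional (depth, y, x) coordinates; sample its feature.
sampled = trilinear_sample(feat_vol, 3.5, 10.2, 20.7)
```

The per-voxel feature then carries an explicit depth index, which is the inductive bias the response argues the flat 2D feature map lacks.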
As displayed on our webpage, without any further training, our pre-trained transformer networks—using features from the fine-tuned ResNet as input—are capable of estimating the depth map of a novel view image through the utilization of normalized soft-attention weights. This outcome provides clear evidence that our approach can help to learn volumetric information, validating the effectiveness of our approach.
We hope this clarification resolves the confusion and provides a clearer picture of our method and its capabilities. Please let us know if you have further questions or need additional details.
In summary, we sincerely appreciate your discussions and suggestions for our paper, as they can significantly improve the quality of our work. We will revise our final version to improve the clarity following your suggestions. | Summary: The paper presents a method for novel view synthesis given sparse image observations of a scene. A diffusion model is used to generate the novel views. A 3D structured conditioning input is first computed, and a pretrained 2D diffusion model is fine-tuned to take this input and compute the novel view renderings. Using a pretrained 2D diffusion model enables better generalization to out-of-distribution object categories and further enables text-based editing.
Strengths: The paper presents a good overview of the existing literature. The idea of fine-tuning large 2D diffusion models for novel view synthesis is a good one, and has been explored concurrently as well. The quantitative results demonstrate that the approach outperforms sparsefusion. The editing results are promising.
Weaknesses: The main limitation is that the results are not view-consistent. The identity and appearance of the hydrant and the background changes with the viewpoint. This limitation is not shared with pixelNeRF or sparsefusion. This is a severe limitation, as the approach does not let us observe any single 3D consistent scene.
The paper lacks technical novelty. The approach combines the insights from sparsefusion for 3D reasoning, and replaces their diffusion model trained from scratch with a fine-tuning of an existing 2D diffusion model. The paper is honestly written - the introduction mentions this. However, with limited technical novelty, the expectation would be that the introduced change would lead to significant practical benefits. This does not happen; as unlike sparsefusion, the presented approach does not enable reconstructing 3D consistent scenes. The noise perturbation step further exacerbates this problem. This part of the method independently processes the different viewpoints.
While some generalization to unseen categories is shown, one would expect much stronger generalization when fine-tuning large models. Why is it not possible to reconstruct non-hydrant scenes?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Is it possible to perform 3D fusion using score distillation, like in sparsefusion?
What limits the method to work on more complex scenes? Does the method generalize to non-hydrant scenes?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 1 poor
Limitations: Some limitations are mentioned, however, some others I mentioned in weaknesses are not present.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and effort in sharing critical feedback regarding our work. We provide the following response to address your concerns about the limitations of our results and about novelty, and we respectfully hope you will take it into account in your final decision.
> The paper lacks technical novelty.
We would like to respectfully clarify that the differences between our work and SparseFusion are not merely the replacement of the backbone. Our approach goes beyond that as follows:
**1) A different approach to enabling a 2D diffusion model for 3D novel view synthesis:** Unlike SparseFusion, which trains its diffusion model from scratch, we must control the output of a pre-trained diffusion model, which is very challenging because of the hallucination problem. Our approach introduces a spatial guidance module that transforms the aggregated features into spatial features for guiding a frozen pre-trained diffusion model to synthesize geometry-consistent novel view images. **This approach experimentally avoids the need to fine-tune the diffusion model and the need for test-time distillation.**
**2) Use of a different 3D geometry modeling method:** In modeling geometry, we additionally learn to encode a volumetric feature representation and then trilinearly sample features, as opposed to bilinearly sampling pixel features from a 2D feature map as in SparseFusion. This helps enforce stronger 3D inductive biases in the feature representations of the objects and scenes and facilitates improved reconstruction capacity. We further show an ablation study on modeling feature volumes in the response PDF; the results clearly show that even with a basic GPNR backbone (used in SparseFusion), our method outperforms SparseFusion, though it falls short of our final approach.
**3) A different way of performing inference:** Unlike most baselines (SparseFusion, DreamFusion, or 3DFuse), which require time-consuming per-object optimization at inference time, we focus on test-time training-free novel view synthesis, making deployment more straightforward. For a more detailed evaluation, we further show novel view synthesis results on our website (as referenced in the Abstract and the main paper). Specifically, without test-time optimization or distillation, our method produces noticeably superior 360-degree videos compared to both PixelNeRF and SparseFusion on the Hydrant-Scene category. These results align seamlessly with our quantitative analyses.
**In summary, we believe our work brings contributions distinct from those of SparseFusion for the community. Our work is centered on how to utilize the 2D prior from a frozen image diffusion model without test-time distillation, and on how this can significantly bolster the generalization and efficacy of novel view synthesis.** Our results unequivocally showcase the advantages of utilizing the 2D prior over the baseline methods. Specifically, we achieve approximately a 50% improvement in FID and a 20% enhancement in LPIPS compared to SparseFusion. We believe our approach to utilizing the 2D prior for novel view synthesis can inspire subsequent works (especially for open-set novel view synthesis without test-time per-object distillation).
> The main limitation is that the results are not view-consistent.
Thank you for your comment. **We'd like to respectfully clarify that the inconsistency is not due to limited performance of our method. Instead, it arises because our approach targets a more challenging yet practical task, i.e., performing novel view synthesis on unseen objects without test-time optimization/distillation.** This significantly improves convenience during deployment.
Further, novel-view synthesis on CO3D images without test-time optimization/distillation is very challenging. **We therefore show the novel view synthesis results (without test-time optimization/distillation) of PixelNeRF and SparseFusion on our website (please refer to the link in the Abstract and the main paper). The results clearly show that our method synthesizes significantly better 360-degree videos than PixelNeRF and SparseFusion on the Hydrant-Scene category (without test-time optimization/distillation).** We genuinely hope that this response sufficiently addresses your performance-related concerns.
> While some generalization to unseen categories is shown, one would expect much stronger generalization when fine-tuning large models. Why is it not possible to reconstruct non-hydrant scenes?
Thank you for the question. **We'd like to respectfully clarify that our paper does not fine-tune existing 2D diffusion models, but instead designs a framework that enables frozen pre-trained 2D diffusion models to perform novel-view synthesis without the need for time-consuming per-object test-time distillation.** Without fine-tuning the pre-trained diffusion model, our method still achieves approximately a 50% improvement in FID and a 20% improvement in LPIPS compared to SparseFusion. We believe this significant improvement demonstrates the benefits of bringing pre-trained models into novel view synthesis, even without test-time distillation.
On the other hand, we would like to clarify that joint object/background scene novel view synthesis is itself very challenging: even many recent works (e.g. CC3D (ICCV 2023) and SceneDreamer (2023)) typically need additional conditions or per-scene optimization, so generalizing to other scenes without specific designs is very challenging for our work as well. To demonstrate the generalization ability of our method in single-category scene novel view synthesis, **we further train our framework on other object-centric scenes and show the results on our webpage (please refer to the link in the Abstract and main paper).**
> Is it possible to perform 3D fusion using score distillation?
Yes, following your suggestion we show our samples with distillation on the webpage (please refer to the link in the Abstract and main paper). Our method shows consistent results, as expected.
---
Rebuttal Comment 1.1:
Title: Following Response
Comment: In summary, we thank the reviewers for the time and effort dedicated to our work. We also provided additional samples to show that our results are significantly better than the baselines in a fair setting and to elucidate the differences between our work and prior studies. **We believe our response sufficiently addresses your concerns and clarifies misunderstandings regarding the perceived limitations in performance and novelty, and we respectfully hope you can reconsider your final decision. If you have further concerns, please feel free to respond and we would be happy to discuss them with you.** | Rebuttal 1:
Rebuttal: Thanks to every reviewer for the time and effort spent sharing critical feedback on our work. To address the questions and points raised by the reviewers, we have provided additional experimental results in our response PDF.
1) In the hydrant-scene dataset, we've plotted a figure (Figure 1 in the response PDF) that demonstrates the consistent improvement in our method's performance over training time.
2) Our ablation study on the geometry module shows how our design enhances the overall performance. Notably, even in the absence of the new geometry model design, our framework outperforms the baselines.
3) We also provide additional qualitative results in our response PDF, which include both input images and images synthesized by our method.
4) **For more samples, please refer to the website linked in the abstract and the main paper.**
a) Comparisons of synthesized 360-degree videos. **The results clearly show that our method can synthesise more consistent samples than baselines without test-time distillation.**
b) Additional object-centric scene level 360-degree videos.
c) Results with test-time distillation, highlighting improved consistency.
d) Hydrant Scene Novel View Synthesis results with 5 context views.
In summary, we believe our work brings contributions distinct from previous works to the community. Our work is centered on
1) how to utilize 2D prior from a frozen image diffusion model without test-time distillation.
2) how 2D prior from a **frozen** image diffusion can significantly bolster the generalization and efficacy of novel view synthesis.
Our results further unequivocally showcase the advantages of utilizing the 2D prior over the baseline methods. **Specifically, we achieve approximately a 50% improvement in FID and a 20% enhancement in LPIPS compared to open-sourced SparseFusion (CVPR 2023).** We believe our approach to utilizing the 2D prior for novel view synthesis can inspire more subsequent works (especially for open-set novel view synthesis without test time per-object distillation).
**We also provide specific responses to address concerns from each reviewer. We believe our detailed response addresses concerns about our method's performance and novelty. We kindly hope that you can consider our response in your final decision. If there are additional concerns or questions, we are happy to discuss further and would appreciate your feedback.**
Warm regards,
The Authors of Submission 12217
Pdf: /pdf/51f718f820ff60d657a24755cbc8d46880fb29b9.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper aims to leverage large-scale 2D diffusion models as priors to improve the task of novel view synthesis in the setting of sparse input views. The paper proposes an approach consisting of two stages: a first 3D-aware stage in which features from context views are aggregated in an encoded feature volume (which is differentiably rendered from the target view), and a second 2D stage in which a conditional image diffusion model produces the final output image. In the first stage, the process of aggregating, processing, and rendering features is similar to PixelNeRF/NeRFormer. The output of this stage (i.e. the RGB render) is perturbed by noise before being inputted to the conditional diffusion model in the second stage. The conditional diffusion model employed by the paper is Stable Diffusion equipped with ControlNet for conditioning on the input features (S3.2).
Strengths: ### The paper is well-structured.
* The introduction is well-written and motivates the problem well. The methods section breaks down the contribution and provides helpful preliminaries (S3.1) in a good amount of detail. It also describes the contribution (S3.2-3.4) precisely without unnecessary complications.
### The paper conducts ablations on different aspects of its architecture
* Section 4.2 is very helpful for the reader, as it contains an analysis of how different aspects of the network contribute to its performance.
### The zero-shot transfer experiments are nice.
* It is good to see that the method trained on PASCAL transfers adequately to COCO.
### The supplementary information includes code.
* I looked through the code briefly and it looks quite clear. (I did not attempt to run it.) It includes configs which are easy to refer to while reading the paper.
Weaknesses: ### Comparison with other methods
* This field is moving very quickly, so it is obviously not reasonable to expect the paper to compare to all other recent methods. However, it would be good to discuss and compare to at least some works beyond PixelNeRF and SparseFusion. For example, some subset of: 3DiM, Triplane Diffusion, Rodin, HoloDiffusion, NerfDiff, GeNVS. There is also the line of work considering distillation of pre-trained models, which is relevant because you use Stable Diffusion, such as: NeuralLift-360 (Xu et al. 2022), NerDI (Deng et al. 2022), RealFusion (Melas-Kyriazi et al. 2023), Dreambooth-3D (Raj et al. 2023), Make-It-3D (Tang et al. 2023), Zero-1-to-3D (Liu et al. 2023).
* Some of these are mentioned in the introduction and include results on the same dataset as you (e.g. CO3D hydrants and ShapeNet Cars), but they are not included in the tables.
* To be clear, I would not expect the paper to compare to a large fraction of these methods, but I would expect the paper to compare to at least a few methods apart from PixelNeRF.
### Performance on ShapeNet Cars
* Along the lines of the comment above, the ShapeNet Cars table (Table 4 in the supplementary) shows relatively weak performance, but the table does not include many methods (including very old methods). For example, {SRN, CodeNeRF, FE-NVS, VisionNeRF, 3DiM} all achieve better PSNR/LPIPS than the proposed model.
### Compute/Memory Utilization
* Most 3D-aware generative models tend to be compute-intensive (during training). It would be interesting to know how your model compares with others on these aspects, and how varying training time/data impacts results.
### [Minor] LaTeX Spacing
* It looks like some v-space tricks were applied (L181-182, L186-187, vertical padding on Table 2 and Table 3, all of Section 4.5). It’d be preferable to remove these.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: ### Modeling the background
* You mention in Section 4.1 that you train on the hydrant category “incorporating the full background.” How exactly do you model the background? Did you consider any alternative approaches for this aspect of the problem?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The limitations are mentioned clearly at the end of the paper. This section suggests that they can be solved by “stronger geometry backbone and train[ing] it on larger datasets”. The question of whether or not this is all that is needed merits a longer discussion, but of course I understand that there are space limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and effort sharing critical feedback regarding our work. We have addressed your points and questions about performance comparison and training costs below.
> Comparison with other methods
We agree with you and would like to open a discussion on concurrently proposed 3D novel view synthesis works, and respectfully highlight that our method contributes different aspects compared to the majority of these.
We would like to first clarify that our paper primarily contributes by 1) designing a framework to enable 2D frozen pre-trained diffusion model to perform novel-view synthesis without the need for time-consuming per-object test time distillation. 2) demonstrating that the utilization of a 2D prior in a pre-trained diffusion model can enhance the generalization capabilities of novel-view synthesis for unseen and open-set category objects.
In order to substantiate our contributions, we would like to compare with concurrently proposed 3D novel view synthesis works, but we considered the following factors when selecting baselines for a fair comparison:
**1) the availability of an open-source codebase** (3DiM and GeNVS do not provide one, so we cannot compare with them fairly, as we synthesize images at different resolutions),
**2) methods that don't require laborious test-time optimization for each object** (a criterion that Nerdi, Dreambooth-3D, NeuralLift-360, RealFusion, and Make-It-3D do not fulfill),
**3) image-conditional methods that can operate under sparse observations (1–3 views)** (Triplane Diffusion and HoloDiffusion are not designed for this).
**4) Following your suggestion, we tried to compare our work with the concurrent Zero 1-to-3, however, we found that Zero 1-to-3 has strong data assumption:** it requires objects to be located at the origin and the placements of the cameras to be pointing towards the origin as a result of its synthetic 3D training data from Objaverse. CO3D, on the other hand, is a dataset of real-life video captures of objects with noisy camera trajectories where each frame does not ideally point at the center of the object. And thus, converting the camera view parameterization for compatibility with Zero 1-to-3 is not straightforward.
Given that we've benchmarked our approach against the recently published SparseFusion (CVPR 2023) and GBT (arXiv 2023), which provide open-source code, we believe our contributions have been suitably assessed in our setting. In this regard, our contributions remain unaffected regardless of comparisons with these baselines in other settings, and we would like to incorporate thorough discussions of them into our revision.
> Performance on ShapeNet Cars
This is a good point, mainly due to the following reasons. 1) The PSNR is low because of the sharpness of the diffusion model's outputs; in our response PDF, we further add results generated from our geometry model alone, where the PSNR score improves by 10%. 2) In our setting, we perform novel view synthesis on images at 512×512 resolution, which is more challenging than the baselines you mentioned (they focus on 128×128 resolution). 3) As described above, we focus on novel view synthesis without test-time per-object optimization, a more challenging but practical setting for real-world deployment, while the others do not.
> Compute/memory utilization
Most 3D-aware generative models tend to be compute-intensive (during training). It would be interesting to know how your model compares with others on these aspects, and how varying training time/data impacts results.
Our primary motivation is to enable a frozen 2D diffusion model to conduct novel view synthesis without the need for test-time optimization, so we only train the geometry module and the spatial guidance model within our framework, eliminating the need to fine-tune the entire diffusion model. This approach significantly improves our training efficiency. To provide a specific comparison, our framework requires only 3 days of training on eight A100-40GB GPUs, whereas the concurrent work, GeNVS, demands 7 days.
Further, we provide the Figure about the relationship between training time and results in the PDF of response. The Figure clearly demonstrate xxx.
> Background Modeling
You mention in Section 4.1 that you train on the hydrant category “incorporating the full background.” How exactly do you model the background? Did you consider any alternative approaches for this aspect of the problem?
Sorry for the confusing description. We do not design specific algorithms to model the background; we simply discard the foreground object mask during preprocessing to preserve the entire scene, including the background. Afterwards, the image goes through the same pipeline as the object-only scenario. The results demonstrate that the frozen pre-trained model can synthesize higher-quality novel views at both the object and scene level when compared with models trained from scratch.
In our paper, we focus on enabling the frozen 2D diffusion model to synthesize novel views without per-object test-time optimization, and show its generalization ability on unseen-category samples. In the future, we would like to add more conditions, e.g. segmentation maps, for synthesizing more complex scene-level novel views.
> v-space tricks:
Thank you for mentioning. We will remove them in the final version following your suggestions.
---
Rebuttal Comment 1.1:
Title: Sorry for the minor errors
Comment: We apologize for uploading an incorrect version of the response that contained one incomplete response. Here's the corrected sentence:
>Compute/memory utilization Most 3D-aware generative models tend to be compute-intensive (during training). It would be interesting to know how your model compares with others on these aspects, and how varying training time/data impacts results.
Our primary motivation is to enable a frozen 2D diffusion model to conduct novel view synthesis without the need for test-time optimization, **so we only train the geometry module and the spatial guidance model within our framework, eliminating the need to fine-tune the entire diffusion model.** This approach significantly improves our training efficiency. **To provide a specific comparison, our framework requires only 3 days of training on 8 A100-40GB GPUs, whereas the concurrent work, GeNVS, demands 7 days.**
Further, we provide a figure about the relationship between training time and results in the response PDF. This figure clearly shows that as the number of training steps increases, the performance of our method steadily improves.
In summary, we thank the reviewers for the time and effort dedicated to our work, and we will revise the paper following your suggestions. If you have further concerns, please feel free to respond to us and we would be happy to discuss them with you.
---
Reply to Comment 1.1.1:
Title: Additional experimental results
Comment: Thanks for your time and effort sharing feedback regarding our work. Following your suggestions, we perform additional comparisons with concurrent works:
> Comparisons with GeNVS and 3DiM:
To ensure a fair comparison with the concurrent works GeNVS and 3DiM, which train diffusion models at a resolution of 128×128, we adapted our method to match this resolution for the intermediate feature map produced by our geometry module (originally 32×32). Nonetheless, our final image synthesis is at a higher resolution of 512×512, whereas both GeNVS and 3DiM only generate images at 128×128. The detailed results are provided below:
| | PSNR | SSIM | LPIPS |
|------------------------|-------|------|-------|
| 3DiM | 21.01 | 0.57 | — |
| GeNVS (autoregressive) | 20.6 | 0.89 | 0.12 |
| Ours | **21.31** | **0.89** | **0.12** |
ShapeNet Cars
| | PSNR | SSIM | LPIPS |
|-------|-------|------|-------|
| GeNVS | 15.48 | 0.27 | 0.37 |
| Ours | **16.42** | **0.33** | 0.46 |
CO3D Hydrant Category (object + background)
From the results, it's evident that our method, **despite synthesizing images at a much higher resolution than the baselines (512 vs. 128), still manages to outperform them on several metrics.** This is particularly evident in PSNR and SSIM on the hydrant scenes, as well as in the PSNR, SSIM, and LPIPS scores overall. Another significant advantage of our framework is its efficiency: it necessitates only 3 days of training on eight A100-40GB GPUs, while the competing method, GeNVS, takes more than twice as long, requiring a full 7 days.
> Can you elaborate more on why converting camera poses is not straightforward? Couldn't you just use the relative poses provided by CO3D?
The reasons why evaluating Zero 1-to-3 on the CO3D dataset is not straightforward are as follows:
1. The location of a camera in Zero 1-to-3 is uniquely defined in a spherical coordinate system, which holds only under the assumption that the camera always points at the center. Consequently, Zero 1-to-3 only parametrizes the relative camera pose by concatenating the change in polar angle, azimuth angle, and radius (distance from the center) with respect to the given input view. However, cameras in CO3D often point at different centers, making the accurate calculation of relative polar and azimuth angles and radius with respect to a common center challenging.
2. All training assets in Zero 1-to-3 were normalized to fit within a unit cube with the camera distances from the center uniformly sampled in the interval [1.5, 2.2]. However, this is not strictly followed by CO3D cameras even after recentering and rescaling the scene.
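To illustrate why this parametrization breaks down, here is a minimal sketch (our own illustrative helper names, not code from either paper) of how Zero 1-to-3's relative-pose conditioning can be computed from camera positions. It is only well-defined under the look-at-origin assumption; CO3D cameras generally violate it.

```python
import numpy as np

def spherical_from_position(cam_pos):
    """Spherical coordinates (polar, azimuth, radius) of a camera position,
    assuming the camera looks at the origin. Only valid under Zero 1-to-3's
    center-pointing assumption described above."""
    x, y, z = cam_pos
    radius = np.linalg.norm(cam_pos)
    polar = np.arccos(z / radius)    # angle measured from the +z axis
    azimuth = np.arctan2(y, x)
    return polar, azimuth, radius

def relative_pose(src_pos, tgt_pos):
    """Relative (d_polar, d_azimuth, d_radius) conditioning vector
    between a source view and a target view."""
    p0, a0, r0 = spherical_from_position(np.asarray(src_pos, dtype=float))
    p1, a1, r1 = spherical_from_position(np.asarray(tgt_pos, dtype=float))
    return p1 - p0, a1 - a0, r1 - r0
```

When cameras do not point at a common center, no single origin makes this conversion exact, which is the incompatibility described above.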
For better visualization of the difference in distribution of camera poses across the CO3D and Objaverse (as used by Zero 1-to-3) datasets, we have uploaded plots of the camera trajectories for several samples onto our project webpage.
Regardless, to the best of our efforts, we tried to compute the relative CO3D poses that best estimate the parametrization of Zero 1-to-3 through recentering and rescaling of the poses and carried out novel view synthesis. **We have updated the project webpage with the results. It can be seen that Zero 1-to-3 struggles to preserve identity and consistency without test-time distillation, often deforming the input object.**
In summary, we sincerely appreciate your suggestions for our paper, as they significantly improve the quality of our work, and **we have already performed additional experiments to compare with concurrent works in a fairer setting following your suggestions, and the results clearly demonstrate our superior performance over concurrent works even though we synthesize images at higher resolutions.
We hope our responses sufficiently address your concerns regarding the result comparisons, and we respectfully hope you can reconsider your final decision. If you have further concerns, please feel free to respond to us and we would be happy to discuss with you.** | null | null | null | null | null | null |
DiffPack: A Torsional Diffusion Model for Autoregressive Protein Side-Chain Packing | Accept (poster) | Summary: The paper proposes DiffPack, a torsional diffusion model that accurately predicts the conformation of protein side-chains given their backbones. DiffPack learns the joint distribution of side-chain torsional angles by diffusing and denoising on the torsional space. To avoid issues arising from simultaneous perturbation of all four torsional angles, the paper proposes autoregressively generating the four torsional angles and training diffusion models for each torsional angle. The method achieves remarkable improvements in angle accuracy on benchmark datasets and enhances side-chain predictions in the AlphaFold2 model.
Strengths: -The paper is well-written and easy to follow, with a clear description of the proposed DiffPack model. While the idea of using torsional diffusion is not entirely novel, the paper builds on the work of Jing et al. at NeurIPS 2022 on small molecule torsion diffusion and applies it to the problem of protein side-chain packing.
-The paper's technical contributions lie in the development of a torsional diffusion model that considers the restrictions imposed by covalent bond lengths and angles, and the autoregressive generation of torsional angles.
-The paper's results demonstrate the potential of DiffPack in advancing protein structure prediction and design.
Weaknesses: -Is there a metric available for assessing the overall conformational plausibility of the generated structures? This question is relevant because, in the context of small-molecule conformation generation, deep learning methods have been shown to produce conformations with lower overall RMSD, but there are still significant challenges with conformational plausibility (e.g. generated benzene rings are not necessarily planar). It would be beneficial to have a metric that takes into account both the geometric accuracy and conformational plausibility of the generated structures to ensure that they are reliable for downstream applications.
-It would be helpful to include a comparison of model run times between DiffPack and other methods such as Attenpack and force field-based methods. This would enable us to better assess the potential gaps in downstream applications when making proteomic predictions with these methods.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please see the above weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors didn't discuss the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your suggestions! Here is our response to your concern.
___
**Q1: Is there a metric available for assessing the overall conformational plausibility of the generated structures?**
This is an interesting question! Assessing the overall conformational plausibility of generated structures, particularly in complex biological systems like proteins, is a multifaceted task. Researchers usually employ two kinds of factors to evaluate the plausibility of a conformation.
1. **Steric Clash Assessment**: Steric clash is the most direct way for assessing the plausibility of generated conformation. Conformation which violates chemical constraints will usually impose unnatural overlap between atoms in 3D space.
2. **Physical Energy Assessment**: The total potential energy of a conformation can also be used as a proxy for its plausibility. Conformations that are at or near a local minimum of the energy landscape are typically more plausible.
We choose steric clash assessment to evaluate the plausibility of a conformation in our work. This measurement does not rely on manually selected energy functions, making it a more straightforward choice. Specifically, the distance thresholds between different atoms are first predefined according to van der Waals radii, and further adjusted to account for different chemical interactions (e.g. H-bonds and disulfide bridges). We classify the conformation as having a steric clash if the pairwise distance between atoms violates this threshold. Details can be found in Appendix B.
Furthermore, it's essential to highlight that our proposed method, which operates in torsion space, has distinct advantages over models operating in Cartesian space, such as AttnPacker. By treating the functional group as a whole in torsion space, intra-group implausibilities (e.g. non-planar generated benzene rings) are unlikely, leaving only inter-group implausibilities. This inherent characteristic of our approach emphasizes its superiority in ensuring conformational plausibility.
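For concreteness, the clash criterion described above can be sketched as follows. This is a toy sketch with a made-up radii table and a single global tolerance; the actual thresholds in Appendix B are adjusted per interaction type (e.g. H-bonds, disulfide bridges).

```python
import numpy as np

# Hypothetical van der Waals radii in angstroms; illustrative only, the
# real per-interaction thresholds differ.
VDW_RADII = {"C": 1.7, "N": 1.55, "O": 1.52, "S": 1.8}

def count_clashes(elements, coords, tolerance=0.4):
    """Count atom pairs closer than the sum of their vdW radii minus a
    tolerance that loosely allows legitimate close contacts."""
    coords = np.asarray(coords, dtype=float)
    clashes = 0
    for i in range(len(elements)):
        for j in range(i + 1, len(elements)):
            threshold = VDW_RADII[elements[i]] + VDW_RADII[elements[j]] - tolerance
            if np.linalg.norm(coords[i] - coords[j]) < threshold:
                clashes += 1
    return clashes
```

With the toy C radius of 1.7 Å, two carbons 1.0 Å apart fall under the 3.0 Å threshold and are counted as a clash, while two carbons 4.0 Å apart are not.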
___
**Q2: It would be helpful to include a comparison of model run times between DiffPack and other methods such as Attenpack and force field-based methods.**
Thanks for your valuable suggestions! We recognize the importance of comparing inference times between our method and other established approaches, and we've included a detailed comparison in the global rebuttal block's attachment. A summary of the experimental results is presented below:
| Methods | Inference Time (s) ↓ | Angle Accuracy ↑ |
|---|:---:|---|
| SCWRL | 2.71 | 56.2% |
| FASPR | **0.10** | 56.4% |
| RosettaPacker | 217.80 | 58.6% |
| DLPacker* | 28.50 | 58.8% |
| AttnPacker* | 6.33 | 62.1% |
| DiffPack_vanilla* | 1.29 | **67.0%** |
As evident from the table, DiffPack_vanilla outperforms the other GPU-based deep learning methods (denoted by *) in terms of speed while achieving the highest angle accuracy. Although the CPU-based method FASPR is quicker due to efficient optimizations in its tree-search process, its prediction accuracy lags behind. In the side-chain packing task, where prediction accuracy is prioritized over inference time, DiffPack's performance is particularly significant.
This comparison highlights DiffPack's balance between computational efficiency and prediction accuracy, showcasing its viability and strength for this specific application. The comprehensive analysis can be found in the global rebuttal block. We will also include it in the revised version of the paper.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Thank you for the response. It well addressed my concerns.
---
Reply to Comment 1.1.1:
Comment: We are very glad that our response is able to address your concerns. Thank you again for your time and patience!
Title: Thanks | Summary: The paper proposes DiffPack a torsional diffusion model to learn side chain placements. In particular, the authors presents a few modification to vanilla torsional diffusion models that improve the empirical results obtaining strong empirical performance.
Strengths: The authors propose a number of modifications to the vanilla application of torsional diffusion to side chain predictions and these have enables them to obtain state-of-the-art performance on this important task.
Weaknesses: Although the empirical results are very good, not many parts of the method are significantly novel. Moreover, among the few modifications that the authors made the presentation of them with respect to the rest of the field and the justification should be improved:
1. Annealed temperature sampling: the low-temperature sampling procedure derived from the assumption of a Gaussian distribution seems to be very similar to the one presented in Ingraham et al. (2022) [20]. The authors should include the reference of this technique in the relevant section and clarify the difference between the methods. Moreover, I believe the statement “ideally the sampling process converges to the global optimum when T→0” is wrong or at least misleading (it does not specify what the ideal situation is).
2. Multi-round sampling: this is related to the mixed-in Langevin steps that multiple papers have proposed before (e.g. Ingraham et al. [20]). However, unlike in those approaches, the authors do not limit themselves to a Langevin step that preserves the expected distribution but instead keep the “ODE term”, therefore making it very unclear what the process is “theoretically” doing. The authors should provide further discussion of these challenges.
3. Autoregressive diffusion: some of the comparisons (excluding the inference performance) used to motivate autoregressive diffusion versus vanilla are misleading:
- Figure 7 presents the number of steric clashes during the diffusion process, not of the generated structures. This is misleading because these intermediate steric clashes may not matter, or may even help the model detect and avoid them. As the autoregressive method (which removes following atoms) artificially removes these steric clashes, this may prevent it from reasoning about them when generating the distributions. The number of steric clashes should instead be compared on the resulting generated structures.
- Figure 5 compares the loss values of different methods; however, these are not comparable. E.g., one reason the loss of X1 is lower than that of X4 might just be that the entropy of the distribution of X1 (which is correlated with the lower bound of the score-matching loss) is lower than that of X4. Similarly, this makes the losses of the different methods not comparable (e.g. the entropy of the conditional distribution obtained by the autoregressive loss is lower than that of the unconditional distribution obtained by the joint loss).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The statement “We focus exclusively on protein side-chain prediction under the assumption of a fixed and highly accurate backbone” seems to contradict with the AlphaFold2 results presented in the main text.
“We extend DiffPack to accommodate non-native backbones generated from AlphaFold2”. How is this extension done? What is the model trained on?
Section B.1 what is the distance threshold?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: In some of the tables (e.g., Table 3), the bolding of the best number is wrong and biased toward the proposed method.
I believe the discussion of the number of parameters is misleading, as parameter count is not a very useful measure of computational feasibility. Instead, the authors should compare the methods by runtime (and potentially memory cost).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful questions. We'd like to first clarify that our work's main focus is on formulating a new approach to the sidechain packing task, rather than proposing general improvements to the diffusion process. Below are our specific responses to your queries.
___
**Q1: Justification about annealed temperature sampling**
The annealed temperature sampling used in our work indeed has similarities to Ingraham et al. (2022) [28]. While the core concept is borrowed from this previous work, we specifically derived the VE-SDE version of the annealed weight under the same assumption. Although initially referenced in Appendix E, we understand the explanation may have been unclear. Therefore, we'll enhance the relevant section in the main text for better clarity, in line with your valuable suggestion.
Regarding the statement, “ideally the sampling process converges to the global optimum when T→0”, we understand how this can be misconstrued, and we appreciate your attention to the detail. What we intended to convey is that theoretically, the distribution $p_T(\mathbf{x})$ would collapse to a Dirac delta function at the global optimum as **temperature $T$ approaches zero**. However, this is only an approximation in practical applications. We will rewrite this section to eliminate any ambiguity.
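To illustrate this point, consider a toy one-dimensional bimodal density: sampling from the tempered density $p(\mathbf{x})^{1/T}$ concentrates around the global mode as $T \to 0$. This is a schematic sketch with made-up numbers, not our actual sampler:

```python
import numpy as np

def tempered_samples(log_p, grid, T, n, rng):
    """Draw n samples from the tempered density p(x)^(1/T) on a discrete grid."""
    logits = log_p(grid) / T
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return rng.choice(grid, size=n, p=w)

def log_p(x):
    """Toy bimodal log-density: global mode at 0, smaller local mode at 3."""
    return np.logaddexp(np.log(0.7) - 0.5 * x**2,
                        np.log(0.3) - 0.5 * (x - 3.0)**2)

grid = np.linspace(-4.0, 7.0, 2001)
rng = np.random.default_rng(0)
fracs = {}
for T in (1.0, 0.3, 0.05):
    s = tempered_samples(log_p, grid, T, 5000, rng)
    fracs[T] = (np.abs(s) < 1.0).mean()
    print(f"T={T}: fraction of samples near the global mode = {fracs[T]:.2f}")
```

As T decreases, the mass near the global mode approaches 1, which is the "collapse to a Dirac delta" behavior described above; in practice T stays finite, so this remains an approximation.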
Thanks again for your valuable suggestions.
___
**Q2: Justification about multi-round sampling**
Thanks for pointing this out! Upon a closer review of the literature [1], we found that multi-round sampling is indeed closely connected to Langevin dynamics.
Since we utilize VE-SDE, which neglects the drift term, the reverse VE-SDE differs from the mixing Langevin dynamics [1] by only a coefficient of $\sqrt{2}$. This coefficient may be interpreted as a temperature scaling factor, and we hypothesize that it contributes to the improvement from the original multi-round sampling.
As you mentioned, the proposed multi-round sampling is indeed a special case of general mixing Langevin dynamics. In light of this, we conducted a new experiment with the more general mixing Langevin dynamics and observed improvements in key metrics, such as an increase in Angle Accuracy from 69.5% to 70.1% on CASP13. These new findings will be incorporated into the revised version of our paper.
Your feedback has provided invaluable guidance, enabling us to refine our method and elucidate the underlying mechanisms. We are sincerely grateful for your contributions to the advancement of this work. Thank you again!
[1] Song, Yang, et al. "Score-based generative modeling through stochastic differential equations." ICLR 2020.
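To make the relation concrete, here is a schematic one-dimensional sketch (illustrative only, not our implementation) of one discretized reverse VE-SDE step versus one mixing Langevin step; with a matched step size $\epsilon = g^2\,\Delta t$, the two updates coincide up to a $\sqrt{2}$ factor in the noise scale:

```python
import numpy as np

def score_gaussian(x, sigma):
    """Score of N(0, sigma^2): grad log p(x) = -x / sigma^2."""
    return -x / sigma**2

def reverse_ve_sde_step(x, g2_dt, score, rng):
    """One discretized reverse VE-SDE step (the drift f is absent for VE):
    x <- x + g^2*dt * score(x) + sqrt(g^2*dt) * z."""
    z = rng.standard_normal(x.shape)
    return x + g2_dt * score(x) + np.sqrt(g2_dt) * z

def langevin_step(x, eps, score, rng):
    """One mixing Langevin step: x <- x + eps * score(x) + sqrt(2*eps) * z.
    Note the extra sqrt(2) in the noise relative to the reverse-SDE step."""
    z = rng.standard_normal(x.shape)
    return x + eps * score(x) + np.sqrt(2.0 * eps) * z

# Iterating the Langevin step preserves its target distribution, here N(0, 1).
rng = np.random.default_rng(1)
x = np.full(5000, 5.0)
for _ in range(300):
    x = langevin_step(x, 0.1, lambda t: score_gaussian(t, 1.0), rng)
print(f"mean={x.mean():.3f}, std={x.std():.3f}")  # close to 0 and 1
```

The small discretization bias in the stationary variance (of order the step size) is visible if the step is made large; this is the usual behavior of unadjusted Langevin dynamics.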
___
**Q3: The number of steric clashes during the diffusion process is misleading.**
Thank you for raising this concern! We appreciate your insights and would like to clarify that Figure 7 primarily focuses on the training phase rather than the inference process. While it’s true that the steric clashes might be managed by a well-trained model during inference, it is important to emphasize that they pose significant challenges during the training phase. Autoregressive diffusion is designed to alleviate these training difficulties, which might not be immediately apparent during the inference period.
Your suggestion to compare steric clashes based on the resulting generated structures is great. We evaluated the number of clashes within a 90% distance threshold and found that joint diffusion did not yield fewer clash pairs than autoregressive diffusion (Joint: 6.9 vs. Autoregressive: 6.0). Furthermore, as shown in Table 4, the substantial improvements achieved by autoregressive diffusion over joint diffusion demonstrate that autoregressive diffusion is the better choice in practice.
___
**Q4: Loss values of different methods in Fig. 5 are not comparable.**
Thank you for bringing this to our attention! We acknowledge that the loss values comparison in Figure 5 might indeed be misleading due to differing entropies as lower bounds across the methods, rendering the losses not directly comparable. We will certainly revise the figure in line with your valuable suggestions.
It's also worth emphasizing that the average training loss of autoregressive diffusion is bounded by the entropy of distribution $p\left(\boldsymbol{\chi}_1, \boldsymbol{\chi}_2, \boldsymbol{\chi}_3, \boldsymbol{\chi}_4\right)$, as is the loss of joint diffusion. While the original comparison might have been imprecise, we believe that a rough comparison can still shed light on the relative optimization or training efficiency of autoregressive modeling versus joint diffusion. Coupled with the experimental results shown in Table 4, these findings affirm the efficacy of autoregressive modeling.
___
**Q5: The statement “We focus exclusively on ….” seems to contradict with the AlphaFold2 results presented.**
Thank you for raising the concern! The additional experiments with AlphaFold2 aim to showcase potential uses of our method on non-native backbones, which does not change the main focus of the paper. We will refine our statement in the revised version.
___
**Q6: How to extend DiffPack on non-native backbones?**
We simply adapt the DiffPack model trained on native backbones to AF-predicted backbones without any retraining. The results actually showcase the generalization ability of the proposed method. We will clarify this point in the revised version.
___
**Q7: In Sec. B, what is the distance threshold?**
Following Matthew et al.[45], the distance threshold between different types of atoms is initially defined by van der Waals radii and further adjusted to account for factors such as H-bond and disulfide bridges. We will add more details to related sections.
___
**Q8: Issues about computational feasibility**
Thanks for your question! We conducted an experiment comparing the inference speed of different methods. Details can be found in the attached materials and the global rebuttal block.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for the response, I am glad my comments were helpful and I appreciate the effort in clarifying the relation of the method to previous work. I have raised my score to 6.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: We're glad that our replies have been able to address your concerns, and we sincerely appreciate your efforts to enhance the quality of the work. Thank you once again! | Summary: This paper focuses on the task of sidechain packing in proteins, where one wishes to predict the positions of the side-chain atoms given the positions of the backbone atoms. To this end, the main contribution of the paper is DiffPack, an extension of recently proposed torsional diffusion models to the side-chain packing task. This is achieved by combining three aspects:
i) autoregressive sampling of the four side-chain torsion angles, with a separate diffusion model trained for each, conditioned on the previous torsion angles
ii) multi-round and annealed temperature sampling to improve quality of generated samples
iii) Modifications to the transition kernel formulation on the torus for residues where the torsion angles have a periodicity of $\pi$.
Experimental performance compared to various baselines for side-chain sampling showcases the improved performance offered by DiffPack.
Strengths: 1. The paper is very well written, clear and easy to understand. The technical choices made throughout the paper are sound and well motivated, and the application area considered is well suited for the torsional diffusion framework.
2. The authors compare their method to a variety of baselines for side-chain packing and showcase improved experimental performance of their method.
Weaknesses: 1. The technical contributions are largely incremental - the formulations regarding torsional diffusion have already been well explored in previous recent papers.
2. Some questions / clarifications regarding the experimental evaluation:
* It is unclear to me what benefits auto-regressive models for each torsion angle offer over joint diffusion. From Tables 1, 2 and 4, one can see that a roughly 4° variation in the MAE of $\chi_1$ corresponds to a 0.1 drop in RMSD. From Table 4, the $\chi_1$ MAE difference between the autoregressive model and the joint diffusion model is about 2°, giving a much smaller corresponding drop in RMSD. Using auto-regressive models definitely seems to accelerate training / convergence, as noted in Fig. 5. However, discretization errors during sampling and the idealization of bond lengths and angles when reconstructing coordinates could erode the improved training performance of the autoregressive models, eventually yielding similar RMSD values.
* Could the authors add a table regarding the run times associated with the evaluations of the different deep learning baselines? How many samples are generated for each protein before selection with the confidence model?
3. An anonymous link for the code submission is not provided, making it harder to verify reproducibility.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the above section for questions.
Edit: I have read the authors' rebuttal and, after the discussion phase, still maintain my assessment of the paper.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have addressed limitations associated with their work
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your suggestions! Here is our response to your concern.
___
**Q1: Potential Concern in Incremental Technical Contribution**
Thanks for your review! The formulation of diffusion models on Riemannian manifolds (e.g., torsion space) was first proposed by Valentin De Bortoli [1] and further applied in various domains such as molecular conformation generation [2], antibody optimization [3], molecular docking [4] and protein backbone design [5]. However, to our knowledge, **DiffPack represents the first application of Riemannian diffusion models to protein side-chain packing**, and it significantly improves prediction accuracy, as demonstrated in our experiments.
1. **Compared with other work where the Riemannian diffusion model is applied, protein side-chain packing presents a much higher degree of freedom.** Earlier studies have focused on either small molecules [2][4] (averaging ~50 atoms) or residue-level proteins [5] (averaging ~200 residues). In our case, we had to model an average of 3000 atoms. Even though we constrained the degree of freedom by choosing the torsion space as the model output, this complexity still far exceeds previous scenarios.
Coupled with the issues discussed in Section 3.3 (Cumulative Coordinate Displacement and Excessive Steric Clash), this complexity posed a significant challenge to directly training and sampling from a diffusion model in this space. As shown in Figure 5, directly optimizing a joint diffusion model suffers from underfitting. To address this specific challenge, autoregressive modeling is introduced to factorize the joint distribution into a product of conditional distributions, which significantly alleviates the underfitting problem.
2. **Compared with previous work in protein side-chain packing, DiffPack is the first method to model the problem in a generative manner.** Previous regression-based methods (AttnPacker, DLPacker) tend to predict the “mean conformation”, while our proposed method can capture the entire distribution of side-chain conformations. Experiments have validated the effectiveness of our generative modeling.
[1]. De Bortoli, Valentin, et al. "Riemannian score-based generative modelling." Advances in Neural Information Processing Systems 35 (2022): 2406-2422.
[2]. Jing, Bowen, et al. "Torsional diffusion for molecular conformer generation." Advances in Neural Information Processing Systems 35 (2022): 24240-24253.
[3]. Luo, Shitong, et al. "Antigen-specific antibody design and optimization with diffusion-based generative models for protein structures." Advances in Neural Information Processing Systems 35 (2022): 9754-9767.
[4]. Corso, Gabriele, et al. "Diffdock: Diffusion steps, twists, and turns for molecular docking." International Conference on Learning Representations (2023).
[5]. Wu, Kevin Eric, et al. "Protein structure generation via folding diffusion." (2022).
___
**Q2: Issues regarding the benefit of autoregressive modeling**
We appreciate your insights and would like to elaborate on the advantages of autoregressive modeling in our context. Protein side-chain packing poses a formidable challenge due to its vast degree of freedom. When training a model on the joint distribution directly, the computational complexity can grow exponentially with the number of variables due to the “curse of dimensionality”. Besides, the additional challenges (Cumulative Coordinate Displacement and Excessive Steric Clash) also negatively affect direct modeling of the joint diffusion process. Together, these lead to an underfitting problem, so autoregressive modeling is introduced in our work to alleviate it. By decomposing the joint distribution into conditional distributions, we simplify the denoising process, enabling us to capture the spatial dependencies between $\chi_k$ and $\chi_{i<k}$.
Regarding ablation study performance, we respectfully disagree with the reviewer about the improvement achieved by our method. It can be clearly observed that the MAEs of all four angles significantly drop after replacing joint diffusion with autoregressive diffusion on CASP14. Additionally, we provide Atom RMSD results in Table A, illustrating a reduction of approximately 0.12 in RMSD, which is already significant in sidechain packing.
Table A: Ablation Study on CASP14.
|#Method|$\chi_1$ MAE|$\chi_2$ MAE|$\chi_3$ MAE|$\chi_4$ MAE|Atom RMSD|
|:----:|:----:|:----:|:----:|:----:|:----:|
|**DiffPack**|**21.91**|**25.54**|**44.27**|**55.03**|**0.770**|
|w/ joint diffusion|26.80|34.51|52.77|63.41|0.893|
Finally, it's worth noting that autoregressive models aren't necessarily “better” or “worse” than other types of models; they are simply better suited to certain types of problems.
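The factorization $p(\chi_1, \chi_2, \chi_3, \chi_4) = \prod_k p(\chi_k \mid \chi_1, \dots, \chi_{k-1})$ can be sketched as follows. This is a schematic with toy stand-in conditionals; the real model runs a reverse-diffusion sampler for each conditional:

```python
import math
import random

def sample_chi_autoregressive(samplers, rng):
    """Sample torsions chi_1..chi_4 autoregressively:
    p(chi_1..chi_4) = prod_k p(chi_k | chi_1..chi_{k-1}).
    Each entry of `samplers` stands in for one conditional diffusion model."""
    chis = []
    for sampler in samplers:
        chis.append(sampler(chis, rng))  # condition on the angles fixed so far
    return chis

def make_toy_sampler(offset):
    """Toy conditional: chi_k drawn near the previous angle plus an offset,
    wrapped onto the circle. A real model would run reverse diffusion here."""
    def sampler(prev_chis, rng):
        mean = (prev_chis[-1] if prev_chis else 0.0) + offset
        angle = rng.gauss(mean, 0.1)
        return (angle + math.pi) % (2.0 * math.pi) - math.pi
    return sampler

rng = random.Random(0)
samplers = [make_toy_sampler(o) for o in (0.5, 0.4, 0.3, 0.2)]
chis = sample_chi_autoregressive(samplers, rng)
print([round(c, 2) for c in chis])
```

Fixing the order from $\chi_1$ to $\chi_4$ matters because each later angle is geometrically downstream of the earlier ones, which is why an order-agnostic variant is not a good fit here.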
___
**Q3: Issues regarding running times**
Thanks for your valuable suggestions! We have included the running time information in Table 2 of the attachment. In short, DiffPack is the best model for balancing speed and accuracy. You can find more details in the global rebuttal section and attachment.
| Methods | Inference Time (s) ↓ | Angle Accuracy ↑ |
|---|:---:|---|
| SCWRL | 2.71 | 56.2% |
| FASPR | **0.10** | 56.4% |
| RosettaPacker | 217.80 | 58.6% |
| DLPacker* | 28.50 | 58.8% |
| AttnPacker* | 6.33 | 62.1% |
| DiffPack-vanilla* | 1.29 | **67.0%** |
___
**Q4: How many samples are generated for each protein before selection with the confidence model?**
Thanks for your questions! We sample 4 conformations for each protein. The value is chosen to balance the speed and accuracy as shown in the attached Figure 1. Additional details of inference process can be found in Appendix F.2
___
**Q5: Issue about code and reproducibility**
We have attached the anonymous code link(https://anonymous.4open.science/r/DiffPack-DED9/). Thank you for raising the concern!
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: I thank the authors for their efforts with the rebuttal process. Below follows a point-by-point response to the rebuttal:
**Q1**
Thank you for the comments; the increase in scale is indeed a valid point that I missed before, but some concerns about the approaches proposed to deal with the scale still remain (more in Q2).
**Q2**
Thank you for the clarification! I understand the issues associated with side-chain packing as you outlined, and the utility of autoregressive models in this situation, but if they were indeed to mitigate the issues to the extent expected, the performance should be considerably better. Could you provide some references on the ranges of RMSD values in side-chain packing that qualify as poor vs. satisfactory? In protein structure prediction tasks, <1Å RMSD is already considered near-native, for instance.
For a task utilizing diffusion models, 0.12A RMSD could just be a byproduct of the noise scale at the final step.
**Q3, Q4 and Q5**
Thank you, these experiments definitely shed more light on the utility of the DiffPack as a fast & accurate sampler.
A time-permitting question: have the authors carried out any quantitative analysis of the steric effects in the predictions from models with and without autoregressive diffusion?
---
Reply to Comment 1.1.1:
Title: Thanks for your comment!
Comment: Thank you for your thoughtful comments and questions. We appreciate the opportunity to provide further clarifications and context for our work.
In protein side-chain packing, Angle Mean Absolute Error (Angle MAE) and Atom Root Mean Square Deviation (Atom RMSD) are commonly used metrics. As cited in references [1][2][3], a side chain is considered accurately predicted when the error is less than or equal to a predefined threshold, typically 20° or 40°. In our work, DiffPack achieved an angle accuracy of **57.5%** at the 20° threshold, compared to **49.9%** for DiffPack without autoregressive diffusion.
To further clarify your concern about the RMSD values, we would like to emphasize that, in protein side-chain packing, the predicted atom positions are confined to the region of the residue. As a result, the RMSD values are expected to be significantly lower than those observed in protein backbone-related studies. To put it into perspective, even traditional side-chain packing methods have been known to achieve around 1Å RMSD, which may seem unexpectedly low when compared to backbone prediction tasks.
[1]. Dunbrack Jr, Roland L., and Martin Karplus. "Backbone-dependent rotamer library for proteins application to side-chain prediction." Journal of molecular biology 230.2 (1993): 543-574.
[2]. Colbes, José, et al. "Protein side-chain packing problem: is there still room for improvement?." Briefings in bioinformatics 18.6 (2017): 1033-1043.
[3]. Jumper, John, et al. "Highly accurate protein structure prediction with AlphaFold." Nature 596.7873 (2021): 583-589.
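For concreteness, the thresholded angle-accuracy metric can be sketched as follows. This is a minimal illustration, not our actual evaluation code; it uses the full 360° period, whereas residues whose torsions have 180° periodicity require the smaller period:

```python
import numpy as np

def angular_error_deg(pred, true):
    """Absolute angular difference in degrees, respecting 360-degree wraparound."""
    d = np.abs(np.asarray(pred, dtype=float) - np.asarray(true, dtype=float)) % 360.0
    return np.minimum(d, 360.0 - d)

def angle_accuracy(pred, true, threshold=20.0):
    """Fraction of angles predicted within `threshold` degrees of the target."""
    return float((angular_error_deg(pred, true) <= threshold).mean())

pred = [10.0, 350.0, 180.0, 90.0]
true = [15.0, 5.0, 120.0, 100.0]
print(angle_accuracy(pred, true))  # errors 5, 15, 60, 10 degrees -> 0.75
```

Note how the 350° vs. 5° pair counts as a 15° error rather than 345°, which is why the wraparound matters.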
>**A time-permitting question: have the authors carried out any quantitative analysis of the steric effects in the predictions from models with and without autoregressive diffusion?**
Thanks for pointing this out! We have conducted an experiment to evaluate steric clashes in structures generated by models with and without autoregressive diffusion. Specifically, when assessing the number of clashes within a 90% distance threshold, we observed that both achieve relatively low numbers of clash pairs (**6.9** for the model without autoregressive diffusion, and **6.0** for the model with autoregressive diffusion). We will include these findings in our revised manuscript following your valuable suggestions.
Strengths: The paper provides a unique solution to the important and well-studied protein side-chain packing problem. The writing is generally clear, and the work is well-contextualized in the literature. The autoregressive diffusion framework is intuitively effective, and the formulation seems correct. This is backed up by strong empirical results when comparing to previous work and in the ablations. Providing confidence scores, reducing the number of parameters compared to previous models, and being able to refine AlphaFold2 predictions will make the method very useful for downstream practitioners.
Weaknesses: In general, this is a strong submission with few major weaknesses.
### Major
- Clarity: it's a bit unclear to me how the model moves between atomic coordinates in GearNet and predicting the scores and confidences on each angle. A few more lines of text or a subfigure could be very helpful here. Likewise, it would be nice to have some more details about model size, training hyperparameters, and training hardware.
- Soundness: The paper talks about confidence scores and shows that they improve generations. It would be better to also have a table or figure showing how well the confidence scores are calibrated.
- Soundness: are the baselines also given the chance to generate multiple conformations and then to have a confidence model pick the best one? If not, the comparisons are not quite one-to-one.
- Significance: While I appreciate the case studies shown in 5.5, the paper would be stronger with more context here. What downstream biological or engineering applications, if any, can DiffPack do that are not accessible with existing methods?
### Minor
- There are a few minor points on clarity: The standard in the field seems to be DDPM (denoising diffusion probabilistic models) instead of DPM, as used in line 252. In Figure 4, the legend says blue and yellow, but I see blue and red in the figure. In Figure 5, the colors for the four autoregressive curves are very difficult to tell apart -- it might help to also vary the linestyle.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - Is there any intuition for choosing the variance exploding SDE instead of variance preserving?
- How well-calibrated are the confidence predictions?
- What hardware was used to train the model? How long was it trained for?
- What downstream biological or engineering applications, if any, can DiffPack do that are not accessible with existing methods?
- How exactly does the model move between atomic coordinates and score / confidence predictions?
- When generating, how many different conformations are being generated and passed to the confidence model?
- What work remains to be done in side-chain packing? Does DiffPack completely solve the problem?
- How hard would it be to train an order-agnostic autoregressive diffusion here? Would we expect that to produce better samples than a model with a fixed decoding order, as shown here?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: - The authors should address where DiffPack still does not solve the side-chain packing problem, either in general or in specific cases.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your time and review! We think that the majority of your detailed concerns about our method are covered in the Appendix. We'll revise the order as you've recommended to prevent any confusion. Here's our response to your concerns.
___
**Q1: Clarity about GearNet Score Prediction**
For atom representations, we build a graph combining bonds and 3D coordinates and apply GearNet-Edge to this graph. Residue representations are obtained by averaging atom embeddings; these are fed into an MLP that outputs the score function and the confidence score. We've addressed this in Sec. 3.4 and will provide more details.
___
**Q2: Calibration of Confidence Scores**
We've done extra experiments to assess the confidence score calibration in the global rebuttal, as per your suggestions.
___
**Q3: Are the baselines also given the chance to generate multiple conformations? If not, is the comparison fair?**
Thanks for noting this! In the benchmark, traditional methods like SCWRL4, FASPR, and RosettaPacker produce multiple conformations and choose the best based on an energy function. In contrast, all deep learning methods except DiffPack are end-to-end regression models, yielding one pose at a time.
It’s worth noting that DiffPack is the first deep learning method that solves the problem with generative modeling. That’s also one reason that makes our method more effective than others – we capture **the whole conformation distribution** instead of “**the mean conformation**”. As such, predicting side-chain conformation essentially involves sampling from high probability regions of the distribution, while confidence selection helps in identifying these regions.
In contrast, regression-based models such as AttnPacker and DLPacker directly predict the final pose by design, so there is no need for these models to generate multiple conformations.
To further address your concern about potentially unfair comparisons between regression-based and generative models, we've conducted two additional experiments for a more direct comparison shown in attachment Tab. 1:
1. **Limiting DiffPack’s generation samples to 1**: We constrain DiffPack to generate only one sample, excluding confidence score selection. From the experiment results below, we can see that our models still outperforms other methods by large margin. This is due to the fact that sampling from a probability distribution is typically biased towards regions of higher probability density, without any additional selection module.
2. **Allowing other deep learning based models to generate multiple samples**: Despite the intrinsic design of regression-based models not supporting multiple conformation generation directly, varied seed and initialization can lead to different generation results. In this experiment, we enabled these models to generate multiple samples with oracle selection. Even when selecting the best predicted pose based on the ground truth RMSD, the overall results remained largely unchanged, with DiffPack outperforming the competition.
We hope this detailed explanation and the additional experiments satisfactorily address your concern!
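For clarity, the sample-then-select procedure can be sketched as follows. This is a toy illustration with hypothetical stand-ins for the diffusion sampler and the confidence head, not our pipeline:

```python
import random

def best_of_n(sample_fn, confidence_fn, n, rng):
    """Draw n candidate conformations and keep the one the confidence head
    scores highest (sample_fn and confidence_fn are stand-ins here)."""
    candidates = [sample_fn(rng) for _ in range(n)]
    return max(candidates, key=confidence_fn)

# Toy setup: a 'pose' is a scalar, smaller |x| is better; the confidence
# head is taken as perfectly calibrated for simplicity.
rng = random.Random(0)
sample_fn = lambda r: r.gauss(0.0, 1.0)
confidence_fn = lambda x: -abs(x)
mean_err = {n: sum(abs(best_of_n(sample_fn, confidence_fn, n, rng))
                   for _ in range(500)) / 500
            for n in (1, 4)}
print(mean_err)  # best-of-4 shows lower mean error than best-of-1
```

Even without selection (n = 1), sampling is biased toward high-density regions, which matches the first additional experiment above; selection then adds a further improvement.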
___
**Q4: What downstream biological or engineering applications, if any, can DiffPack do that are not accessible with existing methods?**
Essentially, DiffPack does not directly solve a new problem, but instead enhances sidechain packing with significant performance improvement. This allows for applications in areas requiring accurate modeling of sidechain conformation, such as protein-protein docking and protein mutation prediction. Further details on potential applications are available in App. A.1.
___
**Q5: What's the intuition behind choosing the variance exploding SDE instead of variance preserving?**
The choice of a variance exploding stochastic differential equation (VE-SDE) over variance preserving (VP-SDE) is primarily due to the nature of non-Euclidean torsion space $\mathbb{T}$, where defining an origin isn't straightforward. In traditional Euclidean spaces, the drift term $f(x,t) = -1/2x$ of VP-SDE can target the origin, but in $\mathbb{T}$, this isn't easily feasible. With VE-SDE, the drift term is neglected, making it a favored option in these contexts. While other options exist, VE-SDE is one of the simplest and most effective solutions available.
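A minimal sketch of why VE is convenient here (illustrative only, not our implementation): the forward VE process on each torsion angle simply adds Gaussian noise of growing scale and wraps onto the circle, with no drift toward any origin; as the noise scale grows, the wrapped distribution approaches the uniform distribution on $\mathbb{T}$.

```python
import numpy as np

def ve_forward_noise_torus(chi0, sigma, rng):
    """Forward VE noising on the torus: add Gaussian noise of scale sigma
    and wrap into [-pi, pi). No drift toward an origin is needed."""
    noisy = chi0 + sigma * rng.standard_normal(np.shape(chi0))
    return (noisy + np.pi) % (2.0 * np.pi) - np.pi

rng = np.random.default_rng(0)
chi0 = np.full(100_000, 0.5)  # all angles start at the same value
R = {}
for sigma in (0.1, 1.0, 10.0):
    chi_t = ve_forward_noise_torus(chi0, sigma, rng)
    # Mean resultant length: 1 for a point mass, ~0 for uniform on the circle.
    R[sigma] = float(np.abs(np.mean(np.exp(1j * chi_t))))
    print(f"sigma={sigma}: circular concentration = {R[sigma]:.3f}")
```

The concentration statistic decays toward zero as the noise scale grows, i.e., the prior at large noise is (approximately) uniform on the torus, which is what the VE formulation exploits.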
___
**Q6: What hardware was used to train the model? How long was it trained for?**
The training of our model was conducted on 4xA100 GPUs. The training spanned over a total of 400 epochs, with the entire process taking approximately 4 days for each model. Specific training details are included in App. F.2.
___
**Q7: How many different conformations are being generated and passed to the confidence model?**
Our approach samples four different conformations from the diffusion models during sampling, which are subsequently evaluated by the confidence model. More details are provided in App. F.2.
___
**Q8: What work remains to be done in side-chain packing? Does DiffPack completely solve the problem?**
Regarding side-chain packing, our model is accurate for various applications, but there's room for improving side-chain conformation prediction. For instance, enhancing results on non-native AlphaFold2 backbones is crucial, especially for generating complete atomic conformations in de novo designed sequences.
___
**Q9: Is it promising to train an order-agnostic autoregressive diffusion here?**
While an order-agnostic autoregressive diffusion model has been proposed in previous work[1], we believe it is not suitable for the domain of sidechain packing. As elucidated in Sec. 3.3, the challenge of cumulative coordinate displacement can be mitigated only by fixing the order from $\chi_1$ to $\chi_4$. Without this specific ordering, the underlying issues would persist and likely hinder the training process of the model.
[1]. Hoogeboom et al. "Autoregressive diffusion models." arXiv, 2021.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for the thorough rebuttal. I am especially pleased with the analysis of the confidence scores and multiple chances for baseline methods. Pending discussion with the other reviewers, I am inclined to raise my score to 7. | Rebuttal 1:
Rebuttal: We would like to extend our sincere gratitude for your time and thorough review of our paper. Your insightful suggestions and comments have provided us with valuable perspectives, enabling us to enhance the quality of our work. As some common issues were raised by multiple reviewers, we respond to them in this global block.
___
**Common Issue 1: Reproducibility of DiffPack**
Acknowledging the necessity for reproducibility, we have attached the anonymous code (https://anonymous.4open.science/r/DiffPack-DED9/) at this stage. We believe this will address concerns and foster reproducibility within this field.
___
**Common Issue 2: Calibration of Confidence Scores in Confidence Selection**
Several reviewers have inquired about the specifics of confidence selection. To clarify, we conducted additional experiments to demonstrate how confidence selection elevates sampling quality and to assess the calibration of the predicted confidence score.
Specifically, we selected 10 proteins randomly from the test set (each with over 150 residues) and plotted angle accuracy against an increasing number of samples. As depicted in Figure 1, the sampled conformation quality consistently improves with an increased number of samples, notably when the number of samples is <= 5, substantiating the efficacy of our proposed confidence selection.
To further demonstrate how well calibrated the predicted confidence score is, we measured the correlation between the predicted confidence score and the negative RMSD, yielding a **Pearson coefficient of 0.664** and a **Spearman coefficient of 0.798**, respectively. We plan to present these results in the revised version of the paper following your valuable suggestions.
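As an aside, this kind of calibration check is easy to reproduce. The sketch below uses hypothetical confidence/RMSD values (not the paper's data) and a minimal pure-Python rank correlation without tie handling:

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def spearman(x, y):
    """Spearman correlation = Pearson correlation of ranks (ties ignored)."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0.0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

# Hypothetical values: a calibrated confidence score should track negative RMSD.
confidence = [0.9, 0.7, 0.4, 0.2]
rmsd       = [0.6, 1.1, 1.9, 2.8]
neg_rmsd   = [-r for r in rmsd]
```

A perfectly calibrated score would give correlations close to 1 against the negative RMSD; the reported 0.664 (Pearson) and 0.798 (Spearman) indicate a reasonably monotone relationship.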
___
**Common Issue 3: Inference Speed and Computation Feasibility in DiffPack**
The computational feasibility of DiffPack has garnered attention from many reviewers. We performed supplementary experiments to benchmark the inference speed of different methods. Methods are categorized as GPU-based or CPU-based. All GPU-based methods (DLPacker, AttnPacker, DiffPack) were evaluated on an NVIDIA A100 40GB GPU, while CPU-based methods were assessed on an AMD EPYC 7513 32-Core Processor @ 2.60 GHz. For algorithms that support batch processing, the batch size was tuned to an optimal value, taking into account specific computational requirements.
It's essential to recognize that the inference time of DiffPack is strongly influenced by the number of samples in confidence selection and the number of rounds in multi-round sampling. We denote by DiffPack-vanilla the model that samples one conformation without confidence selection. As illustrated in the attached Table 2, DiffPack-vanilla outperforms other GPU-based methods in terms of both speed and performance. Although the CPU-based method FASPR achieves superior speed through judicious optimization within its tree-search algorithm, its limited angle accuracy restricts its applicability in contexts where precision takes precedence over speed, as is often the case in side-chain packing.
DiffPack's flexibility allows for integration with various supplementary techniques as shown in Section 3.5, trading off some speed for enhanced performance. For a detailed comparative analysis, we provide Figure 2 that elucidates the interplay between DiffPack's speed and corresponding performance. Among the variations of DiffPack, we refer to the model amalgamated with confidence selection as DiffPack-confidence, and the one further augmented with multi-round sampling as DiffPack-multiround.
It is noteworthy that DiffPack's speed is primarily constrained by the multi-step nature of the denoising process. Significant research efforts [1][2] have been directed towards accelerating this process. Pursuing integration with these speed-enhancing methods constitutes an exciting avenue for future research and potential further optimization.
[1]. Song, Jiaming, Chenlin Meng, and Stefano Ermon. "Denoising diffusion implicit models." arXiv preprint arXiv:2010.02502 (2020).
[2]. Lu, Cheng, et al. "Dpm-solver++: Fast solver for guided sampling of diffusion probabilistic models." arXiv preprint arXiv:2211.01095 (2022).
___
Once again, we wish to express our profound gratitude for the meticulous review and constructive feedback. Your guidance has been instrumental in refining our work, and we welcome any further questions or comments you may have.
Pdf: /pdf/98c5c04761fe56f8a4b7c70e38fa9717a4fb547a.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Why Differentially Private Local SGD -- An Analysis of Synchronized-Only Biased Iterate | Reject | Summary: This paper studies DP-LSGD and compares its performance with DP-SGD. The paper first provides the convergence result of FedAvg under the bounded variance assumption (Theorems 3.1 and 3.2) and provides the convergence analysis of DP-LSGD-GC under the bounded gradient assumption 4.1 and similarity assumption 4.2 for both convex and non-convex cases. The results imply that, using multiple local updates, DP-LSGD converges faster to a neighborhood of the stationary point. Through numerical experiments, the paper demonstrates that DP-LSGD converges faster than DP-SGD with the same privacy budget and "communication" iterations.
Strengths: Originality: this paper provides a novel analysis of DP-LSGD, which considers both clipping and DP-noise. The resulting convergence rate improves upon the existing FedGD algorithm.
Significance: This paper provides a DP algorithm (DP-LSGD) that outperforms DP-SGD with faster theoretical convergence to a neighborhood of stationary points.
Clarity: This paper provides a clear statement of the theorems, assumptions, and adequate numerical justification for the assumption used in the proof.
Weaknesses: Assumption 4.1: This assumption assumes that the clipping error is bounded, which might be too strong. In Fig. 1 (a), $\Phi$ appears to increase as $t$ increases. Such an assumption simplifies the analysis of gradient clipping and thus weakens the significance of the paper a bit.
Comparison with FedAvg: In general, FedAvg considers local SGD updates, while the analyzed DP-LSGD algorithm considers local GD updates; such a comparison is therefore unfair. [R1] also provides the convergence rate of FedAvg, which matches the rate in this paper. Therefore, the convergence part without clipping can hardly be said to be an improvement.
It is unclear whether the numerical comparison is fair or not. It is hard to decide if the reported result for DP-SGD matches the SOTA results (e.g. in [R2]). The authors should report how the hyper-parameters are chosen and whether they are optimal for the algorithm.
The theoretical result suggested that $c = \Theta(\eta)$ and at the same time $B = O(c)$. However, it is unclear if these two results can be satisfied at the same time. The authors should also conduct numerical justification on different choices of $\eta$ and $c$ and report the corresponding $\Phi$.
[R1] Glasgow, M. R., Yuan, H., & Ma, T. (2022, May). Sharp bounds for federated averaging (local SGD) and continuous perspective. In International Conference on Artificial Intelligence and Statistics (pp. 9050-9090). PMLR.
[R2] De, S., Berrada, L., Hayes, J., Smith, S. L., & Balle, B. (2022). Unlocking high-accuracy differentially private image classification through scale. arXiv preprint arXiv:2204.13650.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: As listed above, please clarify
1) how $B$ changes with different $c$ and $\eta$ and
2) whether the reported performance of DP-SGD matches with SOTA results.
3) For the non-convex case, why does the algorithm converge to saddle points? (I believe this is a typo?)
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have addressed the empirical efficiency limitation of the proposed algorithm, which I believe is the largest limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your comments which also inspire us to properly describe the strength of DP-LSGD.
1). On Assumption 4.1: We apologize for the confusion. Actually we do not assume a bounded clipping error; instead we only assume its second moment is bounded. We will stress this in a revision. The details of how we address this challenge and avoid the bounded gradient assumption compared to prior works can be found in Item 1) of our response to Reviewer SLyE.
2). On local stochastic gradients and comparison to prior works: We have partially explained this in the global response and the response to Reviewer 624r: our results can easily capture the local stochastic gradient case, and when we compare with existing works, we also compare to the case of full local gradients. In addition, thanks for pointing out the reference "Sharp Bounds for Federated Averaging and Continuous Perspective", which provides nice lower bounds for federated learning; we will add comparisons to it in a revision. In summary, we fully agree that the improvement from the optimization side is not the key contribution or the main focus of this paper. All our results are presented to explain the better convergence rate and clipping bias of DP-LSGD and hopefully provide systematic guidance for future improvement of DP optimization.
3). Comparison with the state of the art [36]: Thanks for this very sharp question. On one hand, in the experiments reported, we optimized the hyper-parameter selections, including the clipping threshold $c$ and the iteration/phase numbers $K$ and $T$, for all cases of DP-SGD and DP-LSGD. We will elaborate on this in Appendix F of a revision. As for the comparison with [36], we need to mention that its success heavily relies on scaling with a very large batch size. In [36], for CIFAR10, besides many other advanced deep learning tricks, one dominant factor is the large subsampling rate with extensive ($16\times$) data self-augmentation. As discussed in the conclusion section, [36] selects a batch size of over 15,000 in each iteration of DP-SGD to sharpen the signal-to-noise ratio in the subsampled Gaussian mechanism. In contrast, all our reported results are produced with a small batch size of 1000, partially because our machine memory can only support such a setup for ResNet20. When we implement [36] in the same setup with a batch size of 1000, at $\epsilon=8$ it only produces 70.6\% accuracy, comparable to our DP-LSGD result at $\epsilon=4$. In the same setup, at $\epsilon=2$, DP-LSGD achieves 64.0\% accuracy on CIFAR10, while [36] only achieves 60.2\%. Thus, we believe the framework of DP-LSGD, with its connections to federated learning, is a promising direction for future DP optimization research.
4). Dependence on $c$, $B$ and $\eta$. Thanks for this very insightful question. First, as explained in Item 3) of our response to Reviewer WAtK, $B$ in Theorems 4.1 and 4.2 is an arbitrary term, which can be dependent on $K$. In practice, when $K$ is large with a larger step size $\eta$, the norm of the local update also becomes larger, which basically carries more information, and we also need to correspondingly increase the clipping threshold $c$ to produce the optimal performance. Usually, $c$ is selected as the average of the local update norm. We have added additional experimental results in the attachment.
For CIFAR10, the attached Table 2 showcases the average $l_2$-norm of the local updates across different combinations of $K \in$ {1, 4, 8, 12, 16, 20} and $\eta \in$ {0.01, 0.02, 0.03, 0.04, 0.05} over the initial $T=100$ phases. On one hand, for a given stepsize (within each column), a discernible trend emerges: the rate of increase in the $l_2$ norm of the local update decelerates as $K$ escalates. In other words, the ratio between the norm of the local update and the value of $K$ diminishes as the number of local gradient descent steps, $K$, grows. This observation lends credence to our assertion that the sampling noises originating from local gradients—despite their interdependence and evaluation on the same datapoint—tend to cancel out substantially. On the other hand, when focusing on a fixed value of $K$ (within each row), the norm of the local update maintains a linear proportionality with the step size $\eta$, which matches our intuition.
In Table 3, we further compare the incremental norm bound $B$ with the corresponding clipping bias bound across various selections of $K$ and $\eta$. We adopt a clipping threshold in a form $c = 25 \cdot K \eta$, where the value of $c$ is scaled with $K$ and $\eta$. The choice of the constant $25$ is informed by our empirical tests, which demonstrate that this particular clipping parameter selection generally yields the optimal performance in the context of the CIFAR10 dataset. Within Table 3, we present the ratio $B/c$, which captures the bound on the clipping bias as outlined in Theorem 4.1. Evidently, larger values of $K$ tend to yield more concentrated local updates and enhanced clipping efficiency. Based on our observations, in real-world applications of DP-LSGD, a suitable choice is often $K=10$ along with $\eta=0.025$.
As revealed by Table 2, the $l_2$ norm of local updates exhibits a linear relationship with both $K$ and $\eta$. In cases where $K$ and $\eta$ are excessively large, it becomes necessary to correspondingly increase the value of $c$ in order to mitigate clipping bias. However, this simultaneously introduces disruptive high-dimensional DP noise. On the other hand, even for the non-private scenario without noise perturbation, LSGD typically demands a comparatively smaller learning rate to prevent local updates from excessively diverging. For substantially large $\eta$, we observe that the variance of local updates will significantly increase and the convergence becomes less stable.
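For concreteness, one phase of the update scheme under discussion can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: `grad_fn`, the example hyperparameter values, and the noise calibration `sigma * c` are assumptions.

```python
import math
import random

def dp_lsgd_phase(w, grad_fn, K, eta, c, sigma, rng):
    """One DP-LSGD phase (sketch): run K local gradient-descent steps, clip
    the accumulated local update (not each per-step gradient) to l2-norm c,
    then perturb with Gaussian noise of standard deviation sigma * c."""
    w_local = list(w)
    for _ in range(K):
        g = grad_fn(w_local)
        w_local = [wi - eta * gi for wi, gi in zip(w_local, g)]
    delta = [a - b for a, b in zip(w_local, w)]  # update formed by K local steps
    norm = math.sqrt(sum(d * d for d in delta))
    if norm > c:                                  # clipping happens once per phase
        delta = [d * c / norm for d in delta]
    noise = [rng.gauss(0.0, sigma * c) for _ in delta]
    return [wi + di + ni for wi, di, ni in zip(w, delta, noise)]

# Example on f(w) = ||w||^2 / 2, whose gradient is w itself; K and eta follow
# the values suggested above, c is arbitrary for illustration.
rng = random.Random(0)
w_next = dp_lsgd_phase([10.0] * 5, lambda w: list(w),
                       K=10, eta=0.025, c=0.5, sigma=0.1, rng=rng)
```

With $c$ scaled proportionally to $K\eta$ (e.g., the $c = 25 \cdot K\eta$ rule above), larger $K$ yields local updates that are more concentrated relative to the threshold, i.e., a smaller clipping-bias ratio $B/c$.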
Please feel free to let us know if you have any other questions! Thanks again.
---
Rebuttal Comment 1.1:
Comment: 1. Bounding the second moment is equivalent to bounding both the magnitude and the variance of the clipping error: $\mathbb{E}[x^2] = (\mathbb{E}[x])^2 + \mathrm{Var}(x) \leq B \Rightarrow (\mathbb{E}[x])^2 \leq B$ and $\mathrm{Var}(x) \leq B$.
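The decomposition invoked here is the standard variance identity, which can be checked numerically; a minimal pure-Python verification on arbitrary values (the sample stands in for the clipping error):

```python
from statistics import fmean

x = [0.5, -1.2, 2.0, 3.3, -0.7]  # arbitrary sample standing in for the clipping error

m = fmean(x)
second_moment = fmean(v * v for v in x)
variance = fmean((v - m) ** 2 for v in x)  # population variance

# E[x^2] = (E[x])^2 + Var(x), so a bound B on the second moment
# simultaneously bounds the squared mean and the variance.
```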
Overall, I think my questions have been addressed. This is a delicate algorithm with promising theoretical result, yet hard to be used for practical large-scale DNN training.
---
Reply to Comment 1.1.1:
Comment: Thanks so much for your prompt reply! Hopefully we can optimize our code with a much smaller memory requirement in the near future. We appreciate your support for our new research direction to apply federated learning methods to systematically improve clipped DP learning! | Summary: This paper proposed a unified analysis of the convergence of (DP)-Local SGD, which covers (DP) parallel SGD as a special case with K=1, for both convex and non-convex optimization. Under this unified analysis, one can identify error effects due to non-iid objectives, clipping, and DP noises and the convergence rate to the error neighborhood.
Strengths: (1) The introduction section provides a brief yet insightful summary of related subjects including "relation between local SGD and parallel SGD", "sensitivity and privacy analysis methodology of DP", "effect and convergence limitation of clipping" etc.
(2) The unified framework that covers both local SGD and parallel SGD is attractive and makes sense at a high level.
Weaknesses: (1) The problem setting of this paper assumes each sampled worker runs gradient descent instead of stochastic gradient descent, as the full gradient is available in equation (1), though an additive stochastic noise is later added after the aggregation of local updates in equations (2) and (3). This problem setting is much simpler than the standard setting of local SGD, as less divergence across the involved local workers is introduced. The comparison between the analysis from this paper, e.g., Thm 3.2, and the state of the art is therefore no longer fair. (All the state-of-the-art works compared in this paper consider stochastic optimization, and it is well known that GD for deterministic non-convex problems has O(1/T) convergence while SGD for stochastic non-convex problems only has O(1/\sqrt{T}) convergence.)
(2) The assessment in line 313 that DP-LSGD converges faster with O(1/T) than DP-SGD corresponding to K=1 with O(1/\sqrt{T}) is unfair as the convergence rate from Thm 4.2 requires K=\Theta(T) such that the overall computation/iteration is TK = O(T^2) to attain an error decay like 1/T. This is effectively the same O(1/\sqrt{S}) convergence where S is the number of computation steps/iterations.
(3) The new assumption in Assumption 4.1 that involves \Phi in Definition 4.1 does not seem much different from a bounded 2nd-order moment assumption, as \Phi_i measures how much the norm of the update exceeds the clipping threshold. (Given that a bounded 2nd-order moment further implies a bounded first-order moment by the inequality $E[\|X\|] \leq \sqrt{E[\|X\|^2]}$, I don't see how this new assumption can relax the widely used bounded gradient assumption in the literature.) Could the authors discuss whether the new assumption strengthens or relaxes the standard assumptions?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See my questions in the "Weaknesses" section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: I do not see any potential negative societal impact of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your comments.
1). Stochastic local gradients and the convergence rate of federated learning: Thanks for this very sharp question. We first answer the question about the generalization of our results with local stochastic gradients. We totally agree that, in a complete picture of DP-LSGD, there exist three types of noise: (a) noise from sampling over nodes (in each phase, different nodes are randomly selected to participate in the computation); (b) noise from sampling over local samples from each selected node (local stochastic gradients); (c) DP noise for perturbation. In this paper, to provide a clear comparison between DP-LSGD and DP-SGD in the centralized DP model, which is equivalent to a federated learning setting where each node has a single datapoint, we mainly take (a) and (c) into account, since the local stochastic gradient (b) reduces to the full local gradient when there is only one local datapoint. However, as explained in the global response, our results indeed study a more generic scenario where DP-LSGD is perturbed by possibly biased noise, and we always take a generic noise $Q$ in our analysis. If one applies additional local stochastic gradients accompanied by an independent zero-mean sampling noise of variance bounded by $\sigma^2_s$, then, as pointed out by Reviewer WAtK, an additional term $O(\frac{\eta K\sigma^2_s}{n})$ will appear in Equation (4) in the paper. This term captures the effect of accumulated sampling error from $K$ local stochastic gradients and theoretically can be merged with the DP noise.
Second, we want to discuss more about the convergence rate of federated learning and its connection to DP-(L)SGD. We totally agree that in the centralized case, $O(1/T)$ convergence rate of full GD and the $O(1/\sqrt{T})$ rate of SGD have been well understood. However, things become much more complicated in the federated learning case, especially with heterogeneous data. Even in the non-private realm, there are many open questions about the optimal convergence rate when combining both (a) and (b) in Item 1) above, with sampling over both users and local samples. However, with careful error-feedback or variance reduction, LSGD with only (a), where we allow subsampling over users but a selected user still applies full local gradient descent, can achieve $O(1/T)$ convergence, for example Scaffold [30] and FedLin [43]. This suggests, theoretically without noise and clipping, LSGD using full local gradient can outperform standard SGD $O(1/\sqrt{T})$ rate with proper assumptions. In this paper, still in the same setup for clipped DP-LSGD with (a,c), we show it can converge to a neighborhood of minimum at rate $O(1/T)$.
A more important open question is that when the DP-noise is sufficiently small, can clipped DP-LSGD converge at a rate of $O(1/T)$ to the optimum “without clipping bias”? That is why we argue, in the conclusion section, to connect both the research in DP-SGD and federated learning and study whether the variance reduction method used for handling data heterogeneity can cancel out the clipping bias. Though we have tried to incorporate those methods in, such as Scaffold [30] and FedLin [43], into clipped LSGD, unfortunately, we find that current methods cannot be trivially modified to produce bounded sensitivity and thus enable efficient utility-privacy tradeoffs. But we believe this will be a promising direction to systematically improve DP-SGD for deep learning.
2). $O(1/T)$ convergence rate: Thanks for this very sharp question. We apologize for the confusion and as pointed out by Reviewer 7rJQ, the correct statement should be: DP-LSGD converges faster at the same privacy budget or “communication” iterations. As we explained in the introduction, DP optimization under the current analysis "white-box" framework shares many common questions/concerns with federated learning. The DP noise (computed using DP composition) added to each local update is determined by the total number of phases $T$, and thus for a smaller noise, we want a faster convergence rate in terms of $T$, which is equivalent to the communication overhead concern in federated learning, where we want faster convergence in “less communication” rounds $T$. We totally agree with your point that our results do not improve the entire iterations required, where DP-LSGD and DP-SGD still theoretically require the same order of computation complexity. But the local iterations in DP-LSGD will not contribute to the noise bound. We will properly modify this statement in a revision.
3). Technical and theoretical improvement: Due to the length limits, the technical challenges behind assuming only second-moment bounded local updates can be found in the global response and Item 1) of our response to Reviewer SLyE. Moreover, we believe our most important contribution is not proving convergence under the weaker second-moment assumption, but the usable and explainable theory. We are not against the analysis of DP-(L)SGD using bounded gradients. Our concern is that after assuming Lipschitz continuity, we may not properly characterize the difference between the clipping bias produced by DP-LSGD and DP-SGD: either no clipping happens when $c$ is larger than the Lipschitz constant, or we will not be able to explain what we can learn from a nonintuitive convergence rate/bias depending on an unknown or even nonexistent Lipschitz constant to further improve DP optimization.
Please feel free to let us know if you have any other questions. Thanks again.
---
Rebuttal 2:
Title: Follow-ups to Reviewer 624r
Comment: Dear Reviewer 624r,
We want to thank you again for your comprehensive comments. As we approach the final days of the discussion phase, we are eager to ascertain whether we have effectively addressed your concerns and whether any further inquiries remain.
In the rebuttal, we have shown that our framework is even more generic which characterizes the effect of biased perturbation. Notably, we highlight that the scenario involving local stochastic gradients with independent zero-mean sampling noise is seamlessly encompassed as a special case. We also explain our motivation without including the stochastic gradient, since we try to provide a clear comparison between DP-LSGD and DP-SGD in centralized DP scenario where equivalently each client only has one datapoint.
Moreover, we have provided enhanced clarity regarding the benefits inherent to clipped DP-LSGD, which converges faster at a lower privacy budget compared to DP-SGD. We hope that this clarification will also offer a more intuitive understanding of our pursuit of a unified analysis framework. DP-SGD and federated learning share similar underlying concerns expressed through different terminologies: composite privacy leakage in DP and communication overhead in federated learning. This perspective illuminates the potential for cross-pollination of distributed optimization concepts to systematically enhance DP learning.
Our efforts further extend to elucidating the motivations underpinning our introduced assumptions. We do not propose these assumptions arbitrarily, nor is our goal simply to improve prior works under weaker assumptions. We develop theory to explain practice, establishing a foundation for understanding the empirical convergence behavior of clipped SGD. We show that DP-LSGD produces more concentrated local updates, consequently resulting in heightened clipping efficiency. As you pointed out, technically our assumptions merely require a bounded second moment for the local updates.
Finally, if your concerns are all properly addressed, we really hope that the reviewer can positively re-evaluate our work to support this research direction. We appreciate your inputs and we thank you for your time spent reviewing. | Summary: This paper proposes a differentially private local stochastic gradient descent method for both centralized and distributed settings. The authors argue that the proposed method performs fewer clipping operations and in turn produces less clipping bias compared to its counterpart DP-SGD, which does not involve local steps. They also show that DP-LSGD converges sublinearly to a ball around the optimum, which is claimed to be faster than DP-SGD, and exhibits a better utility-privacy tradeoff.
Strengths: - The authors characterize the convergence performance by regarding clipping noises as biased noises and assuming that the incremental norm of local update be bounded, which is otherwise not easy to deal with.
- The authors prove that the proposed DP-LSGD converges faster than that DP-SGD and empirically show that it also has a better utility-privacy trade-off.
- The paper is technically sound and well organized.
Weaknesses: - Assumptions 4.1 and 4.2 seem restrictive to the reviewer. In particular, Assumption 4.1 seems impractical in the sense that one cannot ensure the boundedness of the incremental norm of $\nabla w$ without knowing in advance the basic convergence of the algorithm (note that the algorithm may diverge, making $\nabla w$ unbounded); Assumption 4.2 is assumed to hold for any value of $w$ instead of only at the optimum $w^*$, which would be the more common choice.
- Theorem 4.1 and 4.2: the authors claim that DP-LSGD enjoys faster convergence to a neighborhood of the global optimum/ saddle point than DP-SGD; however, local sample-level differential privacy is guaranteed for DP-SGD, but not for DP-LSGD. For a fair comparison, each client for DP-LSGD should clip the calculated gradient and add the DP-noise at each SGD step in local update to satisfy the local sample-level differential privacy, which will inevitably degrade the convergence performance of the algorithm. In that case, what are the advantages of DP-LSGD compared to DP-SGD and does DP-LSGD still produce less clipping bias than DP-SGD?
- The selection of many parameters such as $B$, $\eta$ and $c$ lacks intuition; also, the experiments do not corroborate the theoretical results very well; for instance, the clipping bias captured by $\mathcal{B}$ is independent of $K$ (c.f., Theorem 4.1) and thus cannot be reduced by increasing $K$, which is inconsistent with the claim that DP-LSGD Produces Less Bias.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - It is not surprising that the authors can avoid the bounded gradient assumption by introducing a bounded variance of the stochastic gradient, which is well known in the existing literature; what is the new technical novelty in the convergence analysis?
- How one can determine the clipping threshold c and $\mathcal{B}$ such that $\mathcal{B}$ is at the same order as c?
- In Theorem 3.2, the authors claim that their iteration complexity to reach an error is tighter than the state-of-the-art results in [a] when there is no perturbation. This comparison seems unfair in that this paper only considers gradient variance among clients (Assumption 2.1) instead of the gradient sampling variance among datapoints, while both of them are considered in [a]. Note that, in this case, $\eta^2$ in the second term of (4) will become $\eta$.
[a] Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh. Scaffold: Stochastic controlled averaging for federated learning. In International Conference on Machine Learning, pages 5132–5143. PMLR, 2020.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: please refer to weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your comments.
1). Regarding Assumptions 4.1 and 4.2: With regard to Assumption 4.1, we apologize for the confusion caused. We did not assume the incremental norm is globally bounded, only that its second moment is bounded. Assumption 4.1 can thus be seen as equivalent to the condition that the second moment of the local update is bounded. To illustrate, suppose we set $c=1$ and the expected incremental norm, i.e., the norm of the local update extending beyond the unit $l_2$-ball, is bounded by $2$; then the expected $l_2$ norm of the local update is bounded by $3$. We elaborate in terms of the incremental norm because it more intuitively captures the heightened concentration of the local updates produced by DP-LSGD.
Second, for Assumption 4.2, we totally agree that this is a bit artificial, but we only use it for DP-(L)SGD in convex optimization. Since we do not assume bounded gradients or Lipschitz continuity, to handle a generic biased perturbation $\Delta$ we have to assume a certain similarity between functions; otherwise we cannot bound the utility loss $f_i(x)-f_j(x+\Delta)$ with only a smoothness assumption. Since we are the first work to study such a biased clipping error without a bounded gradient assumption, we did not find replaceable assumptions in prior works, but we will definitely consider more practical assumptions in future work.
2). Comparison between DP-SGD and DP-LSGD in different models: As partially explained in the global response, when we present our results, we consider a very generic perturbation term $Q^{(t)}$ across the iterations in Equations (2) and (3) in the paper, which can capture many kinds of noise: such as clipping bias, sampling noise, compression/quantification error and DP noise. Thus, for different privacy models, the only difference is that the injected noise will be different. For example, if each local update is clipped to 1, for a total of $T$ phases/communications/releases, and for a centralized dataset with $n$ samples, DP-(L)SGD adds $O(\frac{q \sqrt{T}}{nq})$ noise to the aggregate updates given a sampling rate $q$; for sample-wise local DP, a node with $m$ local samples will add a noise $O(\frac{q_0 \sqrt{T}}{mq_0})$ to its local update produced by either DP-SGD or DP-LSGD, where $q_0$ is the local sampling rate over the $m$ local samples; for strict local DP where each node has a single datapoint and does not trust anyone else, it needs to add a noise $O(\sqrt{T})$ to the released local update, still either by DP-SGD or DP-LSGD. Thus, our results can capture the utility-privacy tradeoff in any privacy model by just plugging in different noise scales $\sigma$. More importantly, in the same setup with the same clipping threshold $c$, the noise injected for DP-SGD is identical to that for DP-LSGD. The only difference is whether one adopts DP-SGD to clip and expose the local gradient or applies DP-LSGD to clip and expose the local update formed by $K$ local gradients. Hence, back to your question, though in this paper we focus on centralized DP, DP-LSGD still outperforms DP-SGD in other privacy models in the same setup. We will add comments on this in a revision.
3). Relationship between $B$ and $K$: We are thankful for this sharp question. We need to mention that the $B$ in Theorems 4.1 and 4.2 is an arbitrary term, which can be a generic function of $K$ and is not necessarily some constant. We apologize for this confusion and will change the notation from $B$ to $B(K)$ for clarity.
4). New techniques to study DP-(L)SGD with only second-moment bounded gradient/local update: Thanks for this insightful comment. We totally agree that for non-private LSGD, the convergence rate with second-moment bounded gradients has been extensively studied. However, it remains challenging to study it in DP-(L)SGD with clipping or generally biased perturbation, and we are the first to address it. Due to the length limits, please refer to our responses to Reviewer SLyE. Also as partially explained in the global response, our goal is not to simply improve existing works with weaker assumptions, but to develop meaningful theory using simulatable or explainable quantities, such as the incremental norm, to instruct systematic improvement.
5). How to select $c$ such that it is of the same order as $B(K)$: This is a very insightful question. The reason we consider such a selection or situation is twofold. First, it is based on our empirical observations on the optimal hyper-parameters in practice. We find that for both DP-SGD and DP-LSGD, the optimal selection of the clipping threshold $c$ is usually close to the average/median $l_2$-norm of the local updates. A similar observation has also been made in prior adaptive clipping works, such as “Differentially Private Learning with Adaptive Clipping”. Second, we make this assumption mainly to simplify Equation (5) in the paper and provide more intuition about the asymptotic utility-privacy tradeoff. The clipping bias can be captured by $\frac{B(K)}{c}$, and theoretically $B(K)$ is not necessarily of the same order as $c$. Thus, given a proper selection of $c$, more concentrated local updates will produce a smaller ratio $\frac{B(K)}{c}$, leading to a smaller clipping bias. More details can be found in our attachment and our item 4) response to Reviewer 7rJQ.
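The rule of thumb above — pick $c$ near the median local-update norm and read the clipping bias off the ratio $B(K)/c$ — can be illustrated with a small sketch (`clip_bias_ratio` is our own hypothetical helper, not from the paper): more concentrated update norms yield a ratio closer to 1.

```python
import numpy as np

def clip_bias_ratio(update_norms, quantile=0.5):
    # c chosen near the median l2-norm of local updates, as in
    # adaptive-clipping practice; B estimates the second-moment
    # (incremental-norm) bound from the same sample
    c = np.quantile(update_norms, quantile)
    B = np.sqrt(np.mean(np.square(update_norms)))
    return B / c
```

A tightly concentrated sample of norms gives a ratio near 1, while a dispersed sample with the same median gives a noticeably larger ratio, matching the claim that concentration reduces clipping bias.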
6). Stochastic local gradient: Thanks for this sharp question. We have partially explained this in the global response where we still provide fair comparison with prior works in the same full-batch gradient setup. Due to the length limit, further details can be found in our item 1) response to Reviewer 624r.
Please feel free to let us know if you have any other questions. Thanks again.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response, which has addressed most of the reviewer's concerns. The reviewer's remaining concerns are still about the restrictiveness of Assumption 4.1 and the proper selection of certain important parameters, and the reviewer will thus maintain the current score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your response, and we are delighted that we have successfully addressed most of your concerns.
1. Regarding Assumption 4.1 (second-moment bounded incremental norm), we explained in our rebuttal that this assumption is technically equivalent to having a local update or gradient with bounded second moment. This is a well-established concept in non-private optimization research. But we totally agree our work could possibly be further improved, and please do not hesitate to share your thoughts or any further suggestions on how we could potentially relax this assumption to enhance our results. On another note, we hope that we have clarified our motivation behind considering the incremental norm. Our intention is not solely to improve existing private optimization work by utilizing this weaker second-moment assumption, though it presents several technical challenges as we explained. More importantly, we aim to develop valuable theory that can explain and guide the field of DP learning. The incremental norm provides a clear and intuitive way to understand and control clipping errors.
2. Regarding parameter selection, we fully agree with your perspective. While we have presented numerous asymptotic analyses regarding hyper-parameter selections and the achieved sharper convergence rates associated with privacy-utility tradeoffs, we acknowledge that practical deep learning often requires fine-tuning. In practice, constants do matter, even though DP-LSGD inherently offers the potential for more concentrated local updates and improved clipping efficiency. This is precisely why we emphasize our released code, which stands as the first PyTorch platform to implement DP-LSGD for practical deep learning tasks with competitive running times. This allows us to fine-tune the optimal parameters and produce state-of-the-art performance. Additionally, we have included extra experiments in our attached document to illustrate what optimal hyper-parameter selections will look like in practice. For instance, we highlight that the clipping threshold $c$ should be proportional to the step size $\eta$ and the number of local iterations $K$.
In summary, we would like to express our sincere gratitude once again for your invaluable feedback. We truly appreciate your support for this innovative research direction by leveraging federated learning techniques to systematically enhance clipped DP learning. If you have any further questions or require additional information, please do not hesitate to reach out to us. Your input is highly valued.
Title: Additional comments on Assumption 4.1 and parameter selection | Summary: The paper focuses on Differentially-Private Local SGD (DP-LSGD), and studies its advantages over the foundational technique of DP-SGD. In particular, the authors show why DP-LSGD provides higher clipping efficiency and less clipping bias compared to DP-SGD. The authors start by showing a convergence analysis on the released iterates of LSGD under perturbations and a bounded variance assumption on the stochastic gradients. Next, they generalize the results to DP-LSGD, and show that DP-LSGD has a faster convergence rate near an optimum point compared to DP-SGD. Lastly, they show that DP-LSGD behaves as an efficient variance reduction of local update, and enables more efficient clipping compared to DP-SGD.
Strengths: 1. The authors focus on the important problem of improving the privacy-utility trade-offs for DP Learning.
Weaknesses: 1. The paper contains many theoretical results (Sections 3-5), and I have not been able to verify the correctness of any of the proofs in the Appendix, but after reading the paper it is not even clear to me whether the proofs use any techniques/ideas that are novel (and might be of independent interest), or use methods from prior works to obtain novel results for (DP-)LSGD.
2. The empirical evaluation is very limited, focusing only on image-classification settings (CIFAR10 and SVHN datasets). Given that the focus of the paper is on (DP-)LSGD which is a building block of (DP) Federated Learning, it might be useful to have experiments on FL benchmark datasets, e.g., StackOverflow, EMNIST, etc.?
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: Listed in the weaknesses section.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your comments.
1). First, we apologize for the confusion caused about the main contributions and technical novelty of our paper. As we have partially explained in the global response, the key theoretical contributions are mainly twofold. On one hand, technically, to our knowledge, this is the first work which presents a unified convergence analysis with a clipping bias description of DP-(L)SGD without assuming bounded gradient (local update) for both convex and non-convex optimization. On the other hand, as an independent contribution beyond privacy-preserving learning, for general convex optimization, to our knowledge, we also provide the first last-iterate convergence analysis without assuming bounded gradient. Due to the page limits, we did not thoroughly compare our techniques with prior works, which is also partially because we use very different methods compared to, say, [16, 32], which must rely on the assumption of bounded gradient. Hereafter, we briefly outline how we mitigate the need for a bounded gradient assumption in characterizing the clipping bias.
We employ distinct strategies for convex, strongly convex, and non-convex scenarios. The non-convex instance is comparatively straightforward, wherein we present a generic analysis applicable to arbitrary perturbations $Q$ with bounded second-order moment, even if biased. A pivotal step is Equation (54) in Appendix B. By relying on the smoothness assumption, we demonstrate that, in expectation, a sufficiently small step size $\eta$ restricts both the local update drift stemming from data heterogeneity and the cumulative biased perturbation $Q$ across iterations. Importantly, we show the rate to the local minimum neighborhood is $O(1/T)$. The convex case poses a more formidable challenge. To counteract the impact of clipping bias, since local update clipping essentially corresponds to a projection operation, expressible equivalently via a scaled step size, we approach it as if each node (user) employs an individual step size for local updates in LSGD with clipping. However, without assuming global bounds on gradients, we do not have a global lower bound for each step size. Nevertheless, by invoking Jensen's inequality alongside the convexity of the utility function, we demonstrate that, once the second moment (or incremental norm) is bounded, clipped LSGD propels advancement toward the global optimum vicinity in an expectation sense. In scenarios of strong convexity, we harness an important property of the gradient descent operator, which is (strictly) contractive in (strongly) convex optimization. This allows us to present more compelling results, ensuring that the divergence introduced by clipping-induced bias will remain bounded. A more comprehensive explanation of these principles will be furnished in a revision.
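The step "clipping corresponds to a projection, expressible equivalently via a scaled step size" can be checked numerically. A small illustrative sketch (the function names are ours): clipping the scaled step $-\eta \nabla f$ to radius $c$ equals a plain gradient descent step with the per-update effective step size $\eta \cdot \min(1, c/(\eta\|\nabla f\|))$.

```python
import numpy as np

def clip(v, c):
    # l2 clipping as projection onto the ball of radius c
    return v * min(1.0, c / np.linalg.norm(v))

def clipped_gd_step(grad, eta, c):
    # projection view: clip the scaled step
    return clip(-eta * grad, c)

def rescaled_gd_step(grad, eta, c):
    # scaled-step-size view: unclipped step with an effective step size
    eta_eff = eta * min(1.0, c / (eta * np.linalg.norm(grad)))
    return -eta_eff * grad
```

The two views coincide exactly, which is why per-node clipping can be analyzed as if each node used its own (data-dependent) step size.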
Concurrently, we regard the more important contribution of our paper as the implications derived from our principal theorems. In addition to demonstrating that DP-LSGD can achieve faster convergence within the same privacy budget in comparison to DP-SGD, we also expound upon the concept of clipping bias. The heightened concentration of local updates and the diminished clipping bias inherent in LSGD pave the way to systematically enhance DP learning. This involves connecting existing techniques from distributed optimization.
2). As for the experiments, as requested, we have added new results on EMNIST, which can be found in Table 1 of the attachment, where we further test and compare DP-SGD and DP-LSGD on training ResNet20 for the EMNIST dataset. EMNIST is an extension of MNIST to more complicated handwritten letters, and we treat its training set of 125,000 samples in total as the private data. We consider 6 scenarios of various privacy parameters where $\epsilon=$ {1.5, 2, 2.5, 3, 3.5, 4} with a fixed $\delta= 10^{-5}$. We optimize the hyperparameter selections of DP-SGD as follows. We search for the optimal step size $\eta \in $ {0.25,0.5,1,1.25,1.5}, the clipping threshold $c \in$ {0.5, 1, 1.5, 2, 2.5, 3}, and the number of iterations (communications/releases) $T \in$ {500, 1000, 1500, 2000} to produce the best test accuracy. Finally, we determine the optimal choice as $\eta=0.5$ and $c=1$. Correspondingly, the $T$ for the 6 different $\epsilon=$ {1.5, 2, 2.5, 3, 3.5, 4} is selected as {1000, 1000, 1500, 1500, 1500, 2000}, respectively. As for DP-LSGD, we select $K=10$, $\eta = 0.025$, $c=2$, with the same selection of $T$. We record their performances in Table 1, where in each case we conduct 5 independent trials and report the median test accuracy. As a benchmark, in the non-private scenario without noise, i.e., $\epsilon=\infty$, we achieve a 94.5% test accuracy in 2000 iterations. From Table 1, we can see DP-LSGD still has an obvious advantage over DP-SGD in the same setup, even for this relatively simpler learning task.
Please feel free to let us know if you have any other questions. Thanks again.
---
Rebuttal Comment 1.1:
Title: Acknowledging the rebuttal
Comment: I have read the authors' rebuttal, and thank them for responding to the raised concerns. The 2nd point, addressed by the authors by conducting experiments on EMNIST, is helpful, and it would be useful to see it in a future iteration of the paper. Since I have not been able to verify the correctness of the proofs, and the implications derived from the principal theorems are an important contribution of the paper, I am decreasing the confidence of my review.
---
Reply to Comment 1.1.1:
Comment: Thank you sincerely for your response and for spending time reading our rebuttals. We are committed to incorporating these supplementary experimental findings into our revised work. Regarding the theoretical part, please do not hesitate to inform us of any ways in which we can facilitate a deeper comprehension of our proposed methodologies for characterizing the clipping bias and the last-iterate convergence. Notably, our approach hinges on the weak assumption of second-moment-bounded local updates.
In this context, we aim to present you with more insights into how DP-LSGD effectively mitigates sampling noise and produces more concentrated local updates, which finally leads to less clipping error. If your schedule permits, please also take a look at Tables 2 and 3 enclosed within the accompanying document.
Table 2 presents the mean $l_2$-norm of local updates across varying combinations of local steps ($K$) and step sizes ($\eta$) throughout the initial $T=100$ phases of training ResNet20 on CIFAR10 using DP-(L)SGD. One important and intuitive pattern that emerges is that, as the number of local gradient descent steps ($K$) increases, the rate of escalation in the $l_2$-norm of local updates decelerates. In other words, the ratio between the local update norm and the value of $K$ diminishes as $K$ grows. This finding lends support to our point that the inherent sampling noises in local updates, even if they are dependent and evaluated on the same data point, exhibit a propensity to largely nullify each other. When we focus on a fixed $K$ value (within each row), it becomes evident that the local update norm maintains a linear correlation with the step size ($\eta$), aligning with our expectations.
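The cancellation effect can be reproduced in a toy simulation (purely illustrative, not the paper's experiment; names are ours): when local gradients carry independent zero-mean sampling noise, the norm of the $K$-step update grows like $\sqrt{K}$ rather than $K$, so the norm-to-$K$ ratio shrinks as $K$ grows.

```python
import numpy as np

def local_update_norm(K, eta, noise_std, rng, d=200):
    # K local steps whose gradients are pure zero-mean sampling noise
    # (the extreme case for cancellation); the K-step local update is
    # the eta-scaled sum of the K noisy gradients
    noisy_grads = noise_std * rng.standard_normal((K, d))
    return np.linalg.norm(eta * noisy_grads.sum(axis=0))
```

Averaging `local_update_norm(K, ...) / K` over a few trials shows the ratio decreasing in $K$, mirroring the pattern reported for Table 2.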
Turning to Table 3, we delve into a comprehensive comparison between the incremental norm bound ($B$) and the corresponding clipping bias bound across various choices of $K$ and $\eta$. Our approach involves a clipping threshold selected as $c = 25 \cdot K \eta$, wherein the value of $c$ scales in relation to $K$ and $\eta$. The rationale behind the selection of the constant $25$ is grounded in our empirical evaluations, which indicate that this particular clipping parameter choice consistently yields optimal results on the CIFAR10 dataset. Within Table 3, we present the $B/c$ ratio, which captures the clipping bias bound described in Theorem 4.1. Evidently, larger values of $K$ tend to yield more concentrated local updates and bolster the efficiency of clipping.
Lastly, if you do not have additional concerns, we sincerely hope that you could consider re-evaluating our work positively and perhaps even contemplate an adjustment to your assessment score. Thank you again. | Rebuttal 1:
Rebuttal: We would like to express our gratitude to all reviewers for their insightful and helpful comments. In this global response, we address the common concerns raised by the reviewers. We begin by describing what we think the three key contributions and technical innovations of this paper are.
The first and foremost contribution is an "explainable" and intuitive theory that characterizes the clipping bias of DP-(L)SGD and why DP-LSGD performs better. We do not merely seek weaker assumptions to produce stronger convergence analysis. Rather, we conduct experiments to compare the statistics of local updates with/without clipping in the first place and try to understand the convergence phenomena of DP-(L)SGD. Subsequently, we formulate a theory to effectively describe these observations. The goal of this paper is not simply to relax the bounded gradient assumption in prior works to the weaker second-moment bounded scenario, though this is technically challenging as explained below. Instead, we try to only use explainable and simulatable terms to describe the convergence and clipping bias. For instance, Definition 4.1 (incremental norm) and Assumption 4.1 are not arbitrarily proposed: DP-LSGD exhibits higher clipping efficiency where the local updates are more concentrated with relatively small incremental norm. This enables better exploitation of the clipping budget, and we show it produces provably reduced clipping bias (Theorems 4.1 and 4.2).
Second, from a technical perspective, this is the first work that characterizes the convergence rate and clipping bias of DP-(L)SGD without assuming globally bounded local updates/gradients. The key challenge in removing this assumption and only assuming a bounded second moment is primarily that the clipping bias can now be unbounded. Indeed, this can be even intractable across iterations if we use existing analysis methods, such as [16,32,33], which rely on a global upper bound. To address this, we develop a tighter analysis of the average case. We show that, with a very careful selection of the step size $\eta$, in expectation, clipped (L)SGD still makes progress towards the minimum. More details are in our reply to Reviewer SLyE. Additionally, for general convex optimization under perturbation, we provide stronger last-iterate convergence (Theorems 3.1 and 4.1) as opposed to the conventional amortized convergence. To our knowledge, this is the first such work that does not assume bounded gradient, which could be of independent interest to general last-iterate optimization research, even outside the realm of privacy concerns.
The third aspect lies in the implications and practicality of our results. Our key motivation behind all presented results is to provide useful theory to systematically instruct improvement over DP optimization, especially for deep learning. In particular, we want to point out the essentially similar nature of DP-SGD and federated learning, and many ideas in (non-private) distributed learning can benefit DP-SGD. Clipped DP-SGD, being the most widely-used private optimizer, has undergone extensive study over the past decade, primarily through empirical approaches that optimize hyper-parameters (e.g., clipping threshold and network architecture). Even state-of-the-art results [36] align with this line. However, the lack of theory to systematically instruct improvement has gradually become the bottleneck for DP deep learning research. In this paper, LSGD, as a natural variance reduction, is the first step in a new direction to use federated learning methods to systematically improve the clipping bias; and as discussed in the paper, once the privacy issues of error feedback/correction methods can be properly addressed, more advanced federated learning acceleration tricks can be used to provably improve DP-SGD. Moreover, for DP practitioners, we want to mention that our released code, the first PyTorch platform for DP-LSGD, has running time competitive with that of Opacus, the well-developed DP-SGD simulator. However, due to the higher memory consumption, we can currently only run DP-LSGD on medium neural networks (ResNet20) with a relatively small batch size (1,000) on our machines.
As for the local stochastic gradient, to give a clearer comparison with DP-SGD, we describe the main algorithm in a form where each node uses the full local gradient rather than a stochastic one (since centralized DP-SGD is equivalent to the case where each user holds only one datapoint). But as briefly mentioned in Section 2, the scenario of applying local stochastic gradients can still be easily captured by our results. Please note that in all our theorems, we always take a generic noise $Q$ (see Equations (2) and (3) in the paper) into account. This $Q$ can be clipping error, DP noise, stochastic gradient error or even compression/quantization error. In particular, for the stochastic gradients, the sampling noises are independent and zero-mean and thus can be merged with the DP noise.
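The merging argument in the last sentence amounts to the fact that variances of independent zero-mean noise sources add, so both can be treated as one perturbation term $Q$. A quick numerical check with illustrative noise scales (the values 1.0 and 2.0 are our own):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
sampling_noise = rng.normal(0.0, 1.0, n)  # zero-mean stochastic-gradient noise
dp_noise = rng.normal(0.0, 2.0, n)        # injected Gaussian DP noise
merged = sampling_noise + dp_noise        # one combined perturbation Q
# std of Q matches sqrt(1.0**2 + 2.0**2), so the two sources can be
# analyzed as a single noise term in Equations (2) and (3)
```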
As for the comparison with state-of-the-art non-private LSGD results, we apologize for the confusion caused, but we do provide a fair comparison in the same full-gradient setup. For example, in Scaffold [30, Theorem 1], we first set $\sigma=0$ and then compare.
Correspondingly, since we provide a generic analysis for the convergence of the biased iterates, captured by the generic $Q$, we are able to handle any privacy setup, including centralized DP, local sample-wise DP, and strict local DP, where the only difference is their different sensitivities and the DP noise required. Thus, in this paper, we do not restrict ourselves to any particular privacy setup but show a unified analysis. We will elaborate on this in the individual responses.
Finally, we have added additional experimental results on the EMNIST dataset and about the relationship among the incremental norm $B$, clipping threshold $c$ and the stepsize $\eta$. Details can be found in the attachment.
Pdf: /pdf/a0f5d4c99bb7c0eefb9d29fb57c8f4e4bb149639.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This submission studies Differentially-Private Local Stochastic Gradient Descent (DP-LSGD), and shows that DP-LSGD with multiple local iterations can produce more concentrated local updates and less clipping bias compared to DP-SGD, assuming that the stochastic gradient is of bounded variance. The main contribution of this submission is to show that DP-LSGD has a faster convergence rate compared to DP-SGD. The authors also add experiments to show that
DP-LSGD produces a better utility-privacy tradeoff than DP-SGD.
Strengths: 1. This submission develops the connections between the clipping bias and the second moment of local updates, which is something new in the research of differentially private optimization.
2. This submission shows that DP-LSGD can converge faster compared to regular DP-SGD.
3. The experimental results (the comparison between DP-SGD and DP-LSGD) in this submission look convincing, and this paper is well-written.
Weaknesses: 1. The implementation inefficiency (local updates computed in parallel at the cost of large memory) is a minor issue here.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your positive assessment of our work. As you mentioned, we believe a more efficient implementation of DP-LSGD would be a promising direction for further work on DP optimization/learning, especially from a system engineering perspective. We present the first step, though at the cost of relatively large memory. However, after careful optimization, our released code has running time competitive with that of Opacus, the well-developed DP-SGD simulator. We will think about further improvements. Please feel free to let us know if you have any other questions. Thanks again.
---
Rebuttal Comment 1.1:
Title: Acknowledging the rebuttal
Comment: I have read the authors' rebuttal. Overall, I think my questions have been addressed.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for your reply, and we are glad that we have addressed all your concerns. We appreciate your support for our new research direction of connecting federated learning techniques and DP learning for systematic improvement. | Summary: The authors provide a unified analysis of the clipping bias and the utility loss in privacy-preserving gradient methods for centralized and distributed setups. The conclusion shows that LSGD behaves as an efficient variance reduction of local update, where multiple local GDs with a small learning rate cancel out substantial sampling noise and enable more efficient clipping compared to DP-SGD.
Strengths: 1. The authors build the connections between the clipping bias and the second moment of local updates. This initiates a new direction to systematically guide private learning by connecting it with the research on variance reduction in distributed optimization.
2. The authors conduct analysis on both convex and non-convex ERM problems with a fairly mild assumption of the bounded stochastic gradient variance.
Weaknesses: 1. I understand this is a theoretical paper, but the authors should state the experimental setup more clearly. It is unclear whether the setting is IID or non-IID. The authors should illustrate the consistency of the empirical support under both IID and non-IID settings, which are of the most concern to the FL community.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please check above.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors discuss the limitations of their work in the last section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your comments. We apologize for the confusion caused about our experimental setup. Our experiments focus on the application of DP-SGD and DP-LSGD in the centralized setup, which is essentially equivalent to a federated learning model of $n$ nodes (users), each holding a single distinct data point. Our experimental setup, strictly speaking, belongs to the non-i.i.d. case. As our results only assume local updates of a bounded second moment, theoretically they can be applied to study both i.i.d. and non-i.i.d. scenarios allowing data heterogeneity. Also, as mentioned in the global response, our results capture both centralized DP and local DP, where the only difference is the different noise scale $\sigma$ required. In addition, we want to mention that our released code can easily handle a more generic data partition (not necessarily one datapoint for each individual) to simulate a generic distributed optimization setup with automatic noise determination. We will consider adding more experiments about local/user-level DP in a truly federated learning setup in a revision. Please feel free to let us know if you have any other questions. Thanks again.
DISCOVER: Making Vision Networks Interpretable via Competition and Dissection | Accept (poster) | Summary: This paper proposes a Dissection of Competitive Vision Networks (DISCOVER) to build interpretable neural networks. It aims to understand the neuron functionality for image inference. Moreover, this paper introduces the Jensen-Shannon divergence between the probability of concept presence and the probability of neuron activation for network dissection. Experimental results evaluate the effectiveness of the proposed CVNs.
Strengths: 1. Relatively interesting idea of the proposed CVNs for network dissection.
2. High interpretability of the proposed framework, as evaluated in experiments.
Weaknesses: 1. Limited novelty. DISCOVER method is similar to DMoE [ref-1] combined with the work [ref-2]. Please clarify their difference.
2. Lack of more visualization results to evaluate the interpretability over other methods.
[Ref-1]. Deep mixture of experts via shallow embedding. Uncertainty in artificial intelligence. PMLR, 2020.
[Ref-2] Interpretable Neural Network Decoupling. In ECCV, 2020.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Refer to the Weaknesses.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the kind comments concerning our approach and for pointing out potentially related work to ours. We believe, however, that the reviewer is a bit harsh concerning the novelty of the proposed framework. We support this claim by highlighting the pivotal differences between our work and the suggested works.
> Limited novelty. DISCOVER method is similar to DMoE [ref-1] combined with the work [ref-2]. Please clarify their difference.
Both DMoE [ref-1] and the proposed Competitive Vision Networks aim to sparsify the networks on a per-example basis and do so in a way that renders the networks more expressive. However, the setup of the networks, the rationale, the training procedures and, overall, the way these two methods achieve this sparsification are radically different. On the one hand, in DMoE, softmax-ed embeddings of the *input image* are obtained using a "shallow" network; these are then passed through ReLU-activated multi-headed sparse gating networks to sparsify the results in each layer. On the other hand, CVNs group feature maps into blocks, and a competitive random sampling procedure is employed in order to select the winner in each block. This is instantiated by using the response of each feature map to the **layer input** (and not the original image) and by introducing appropriate discrete latent indicators to select the winners, modeled via solid Bayesian arguments.
Moreover, the "shallow" embedding network that DMoE introduces comprises 4-5 layers; in contrast, we do not introduce any additional parameters and use the layer's embedding of its input to drive the sparsification mechanism. At the same time, the embeddings of DMoE not only arise from the original raw image, but a different weight matrix is introduced for each layer via the gating network that multiplies the latent mixture weights, while relying on the ReLU operation to "sparsify" the results; instead, we rely on foundational Bayesian arguments and procedures to learn categorical variables that explicitly indicate unit activation in a competitive manner.
With respect to [ref-2], their method uses a Mutual Information based approach between the inputs and an encoding vector in each layer, in order to discover a unique calculation path for each input. As with the previously discussed work, the only similarity this method bears to our CVN models is the per-example sparsification of the network. We do not consider any mutual information constraints, we do not use any Architecture Controlling Modules, i.e., additional ad-hoc embedding networks to drive the sparsification, and, more importantly, we rely on the concept of local competition between channels, instantiated via a principled formulation with appropriate discrete latent variables, not discretization approaches for training binary (and not categorical) latent variables.
> Lack of more visualization results to evaluate the interpretability over other methods.
As mentioned in the main text, line 305, and in our responses to the concerns of other reviewers, since we trained the novel CVN models from scratch, and considering their highly different mode of operation, a direct comparison with other interpretable methods is not feasible at this point. In the context of this work, and since interpretability in this case is based on the novel CLIP-Dissect framework, most visualizations pertain to the standalone Neuron Identification results and the quantitative analysis relative to conventional networks provided in the main text (with some additional qualitative figures in the Supplementary); we nevertheless showed that CVNs improve performance compared to their conventional counterparts. Since this is the first work that trained CVNs at a large scale, we aim to expand the interpretability investigation to other interpretability-focused methods.
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal. The responses have addressed my concerns well, and I have no more questions. However, I am not familiar with this area, so I have low confidence in this paper.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for acknowledging our efforts during the rebuttal period. We will add a discussion of the suggested related work in the final version to highlight the differences. | Summary: This paper proposes to use stochastic local winner-takes-all (LWTA) layers in vision networks, combined with CLIP-Dissect, to improve the ability of a neuron to capture a concept. A stochastic LWTA layer groups pre-activation neurons into $B$ blocks of $U$ features each and, within each block, chooses one feature with probability given by the softmax of the pre-activation neurons in the block. The chosen feature is preserved while the rest are zeroed out.
The paper tests DeiT-T, DeiT-S, and ResNet-18 architectures on ImageNet and Places365 replacing original activation functions with stochastic LWTA. The classification performance slightly improves when each block is of size 2, or $U=2$, and degrades as $U$ increases.
The stochastic nature of LWTA allows the authors to use CLIP-Dissect with the Jensen-Shannon Divergence (JSD), which they do. The learned concepts for the different investigated neurons, such as “green,” “red,” “trains,” and “snowmobiles,” match the images those neurons are activated for. The text embedding similarity of the found concepts to the ground-truth ImageNet label is higher with LWTA neurons and JSD than with normal neurons and WPMI.
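The block-wise sampling the summary describes can be sketched in a few lines of NumPy. This is an illustrative toy version only (the function name, shapes, and the plain-NumPy sampling loop are our own choices, not the paper's implementation):

```python
import numpy as np

def stochastic_lwta(pre_acts, U, rng=None):
    """Stochastic Local Winner-Takes-All over blocks of U competitors.

    Within each block of U units, one winner is sampled with probability
    softmax(pre-activations within the block); the winner keeps its
    pre-activation value and the remaining U-1 units are zeroed out.
    """
    rng = np.random.default_rng() if rng is None else rng
    blocks = pre_acts.reshape(-1, U)                        # (B, U)
    z = blocks - blocks.max(axis=1, keepdims=True)          # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    winners = np.array([rng.choice(U, p=p) for p in probs])
    mask = np.zeros_like(blocks)
    mask[np.arange(blocks.shape[0]), winners] = 1.0         # one-hot winner per block
    return (blocks * mask).reshape(pre_acts.shape)
```

With $U=2$ this leaves half the units active per example; with $U=16$ only 1/16, which is the per-example sparsity discussed later in the reviews.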
Strengths: The presentation is clear and the motivation in the opening sections is well-expressed. Also, extensive code is provided to support the approach.
The idea of introducing competition between neurons and having one win out seems intuitive and suited for interpretability, the stated goal of this paper.
The results are promising (the introduced module does not harm performance too much, and potentially increases performance if a very mild form of it with $U=2$ is used, and some quantitative evidence for superior interpretability is provided).
Weaknesses: It’s not fully clear whether the approach is being presented as an ante-hoc method or a post-hoc method. It seems one could argue that needing to train a network from scratch with the LWTA nonlinearity makes it an ante-hoc method. It also seems one could argue that since LWTA is a drop-in replacement for any non-linearity, it functions more as an enhancement to any ante-hoc or post-hoc methods applied on top of the architecture rather than an ante-hoc/post-hoc method itself. Could the authors clarify this?
The evaluation could be more thorough and more convincing. Here are some suggestions, not all of which might be necessary:
- Significant decline in performance is stated as a drawback of other ante-hoc methods, which this method is arguably an instance of; is it possible to compare classification performance with one or more baseline ante-hoc approaches?
- Can you provide some sort of comparison between $U$ and the interpretability of the proposed approach? Such as, a plot of $U$ vs. cosine similarity between intermediate neuron description, or comparing concepts found with $U=2$ to $U=16$. Since $U=2$ achieves the best performance, it’s not clear why we would prefer $U=16$. The argument that $U=16$ achieves higher sparsity and thus higher interpretability intuitively makes sense, but is this supported empirically?
- In my opinion, an additional interpretability metric to evaluate the potential of CVNs for interpretability purposes would improve the paper. Would it make sense to use the IOU-based interpretability metric from the Network Dissection paper? If that would require too much additional experimentation, is some sort of more lightweight added experiment/comparison possible?
Miscellaneous typos/comments:
- L177: Doesn’t $\mathrm{CE}(y_i, f(X_i, \hat\xi))$ denote the cross entropy, not $f(X_i, \hat\xi)$?
- L340: “the resulting networks yielded on par or even improved performance, even when using up to 4% of active neurons per example” seems a bit of a stretch; going from original network to most sparse competition-based network for DeiT-T on ImageNet-1k, the accuracy dropped from 72.2 to 70.5, for ResNet18 on Places365, from 53.5 to 49.5. I would call these drops significant. I would call the drop of DeiT-S on ImageNet-1k from 77.0 to 76.7 “on par”, but not improved.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I put the most relevant questions in the “Weaknesses” section.
- In L228, it is claimed “In the context of inner-product neurons, we directly obtain the presence probability via Eq. (2).” But aren’t vision transformer neurons also spatially arranged, like convolutional neurons? I.e., for a given feature there are different instances of it corresponding to a spatial map, i.e., to the different embedded patches. So do you also average over spatial positions for transformers?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations were adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful analysis and positive comments on our work. We will correct and update the phrasing of the noted typos/comments for the camera-ready.
> It’s not fully clear whether the approach is being presented as an ante-hoc method or a post-hoc method.
This constitutes a very interesting point and we'll clarify it in the camera-ready. Post-hoc interpretability methods are considered for already trained backbone architectures, to which a post-hoc analysis method like Network Dissection is applied. In the case of LWTA-based networks, there weren't any pretrained models available to apply Network Dissection to. That is why we needed to train them from scratch; after publication, these will be made available for further research investigations. Thus, the proposed framework constitutes an application of a post-hoc interpretability method to CVNs, since it is based on the CLIP-Dissect approach, which is itself post-hoc; nevertheless, it required training because pretrained models were not available.
The LWTA mechanism is easily applicable to any model architecture; however, it can't just be used as a drop-in replacement in existing ReLU-trained backbones. For example, replacing the activations in a pretrained ReLU network and directly performing inference will lead to performance degradation. This is natural considering LWTA's unique mode of operation: the model needs to be trained with the competition mechanism to learn the different neuronal activation patterns and how the grouped components compete for their output. In this work, we argue and experimentally validate that this mechanism is indeed a viable alternative to ReLU-based activations, accompanied by significant benefits in terms of per-example activation sparsity and, subsequently, interpretability.
> Significant decline in performance is stated as a drawback of other ante-hoc methods, which this method is arguably an instance of; is it possible to compare classification performance with one or more baseline ante-hoc approaches?
We believe that the answer to the previous question clarifies that our approach is not an ante-hoc method like Concept Bottleneck Models. We are simply training vision networks from scratch with the LWTA competition mechanism in place of the conventional non-linearities. Moreover, the obtained performance is comparable to, and in some settings even improves upon, the classification results of the baseline conventional networks.
> Can you provide some sort of comparison between $U$ and the interpretability of the proposed approach? The argument that achieves higher sparsity and thus higher interpretability intuitively makes sense, but is this supported empirically?
It is true that $U=2$ offers the best performance accuracy-wise under the same training regime, but in terms of per-example sparsity, $U=16$ offers a better alternative. However, this question is related to Question 3 of Rev. BNcZ, where we noted that:
In the context of interpretability, activating, e.g., 1/16 of the original neurons in each layer surely facilitates the process of neuron specialization: since not all neurons are activated for each example, neurons can specialize to only the features of the examples for which they are active. After Neuron Identification is performed, we can even narrow down the neurons that were activated by a particular test input and try to reason using only these for a downstream task. For example, if we consider a hidden layer with 800 components and U=16 competitors, it is easier to reason about and investigate the 50 activated components than to investigate the effect of all components in conventional architectures.
The per-example sparsity will facilitate the examination, and potentially the correction, of the network's decisions. In the context of the used similarity metrics, we obtain similar cosine similarity values for different $U$ values. Our next steps comprise the exploration of the activation patterns and interpretation results in various settings. This nevertheless requires extensive investigations that go beyond the scope of this work.
> In my opinion, an additional interpretability metric to evaluate the potential of CVNs for interpretability purposes would improve the paper. Would it make sense to use the IOU-based interpretability metric from the Network Dissection paper? If that would require too much additional experimentation, is some sort of more lightweight added experiment/comparison possible?
This is an interesting suggestion from the reviewer. However, considering other interpretability metrics, such as the IoU-based one proposed in the Network Dissection paper, (1) requires a totally different treatment, and (2) defeats the purpose of using the post-hoc CLIP-Dissect method. In this work, we selected the CLIP-Dissect method over conventional Network Dissection. That is why, even in the original CLIP-Dissect publication, there isn't any comparison on a metric basis using the IoU or other interpretability measures proposed in related works.
> In L228, it is claimed "In the context of inner-product neurons, we directly obtain the presence probability via Eq. (2)." But aren’t vision transformer neurons also spatially arranged, like convolutional neurons? I.e. for a given feature there are different instances of it which corresponding to a spatial map, corresponding to the different embedded patches. So do you also average over spatial positions for transformer?
We thank the reviewer for raising this point; it is something that we wanted to clarify in the main text. For the transformer architecture in particular, we alter the MLP block, specifically replacing the GeLU activation with the LWTA mechanism. Indeed, in this case, we average over the spatial positions in order to obtain the probability.
---
Rebuttal Comment 1.1:
Comment: I have read the authors' response and stand by my initial evaluation.
It remains a point of possible improvement that the authors provide a concrete example of how a sparser CVN (CVN trained with higher $U$) provides better interpretability. The intuitive argument makes sense, but a case study would really help drive it home. I am curious what would happen if for a couple images, you listed all of the concepts corresponding to the activated neurons -- this is something the CVN design is particularly suited for as the latent one-hots explicitly declare which neurons are activated. Would CVNs with $U=2$ have a lot more irrelevant concepts than CVNs with $U=16$? That would be interesting to show.
---
Reply to Comment 1.1.1:
Comment: We are glad that through this discussion, we were able to reach an agreement that in this setting the sparse activation argument makes sense. Moreover, the argument that the reviewer is making, i.e., using the latent one-hot vectors to investigate which neurons are activated for each example (or even finding patterns of activations among different images), is exactly one of the most important properties of CVNs. This will be a very important addition to our work. We are currently working on this analysis; we'll do our utmost to provide the results before the end of the discussion period. | Summary: This paper proposes a novel method for interpreting vision networks by combining an architecture that incorporates stochastic local-winner-takes-all activations with the CLIP-Dissect framework. The authors make use of the probabilistic interpretation of LWTA as a categorical distribution to derive a similarity measure for LWTA activations based on the Jensen-Shannon Divergence. They show the benefit of their approach by comparing the discovered interpretations to ground-truth labels for conventional and competitive (LWTA) networks.
Strengths: The studied problem of interpretability is very important, the idea is well motivated, the background and related work is informative, and the writing is clear.
The accuracy of the LWTA adapted transformer is very encouraging and suggests the usefulness of competition independently of interpretability.
The improvements in automated neuron description are encouraging.
Weaknesses:
Clarity could be improved: The figures are not very helpful. Especially Fig. 1 could be improved by simplifying it (e.g., without showing individual neurons and weights). In my opinion, Fig. 2 is not necessary and could instead make space for a much-needed figure illustrating the DISCOVER framework.
The math feels a bit convoluted with too many new symbols introduced in each paragraph. Again, simplification could help understandability.
The paper spends quite some time explaining how LWTA and DISCOVER can be used for convolutional networks. However, the main results of the evaluation focus on transformer architectures, for which it is less clear how LWTA is incorporated (only replacing the ReLU inside the MLP block, or also in the attention and residual branches?). I suggest switching focus and making conv architectures much less prominent.
The evaluation feels incomplete. The main evidence for the efficacy of DISCOVER are:
1. the qualitative results in Figure 3, which are only partially convincing. E.g., the "aquaculture" neuron seems to react to a highway, and the "dashboard" neuron mostly picks up on hi-fi equipment.
2. Table 2, which shows that, compared to non-LWTA networks, the textual embedding of the discovered neuron descriptions is more similar to the corresponding ground-truth label. However, the CLIP cosine similarity score is hard to interpret and the reported gains seem mild.
For a paper that focuses on interpretability, I find the results hard to interpret. As it stands, the paper has me convinced that stochastic LWTA is interesting and should be studied further, but fails to convince me of the added value of DISCOVER over e.g. CLIP-dissect.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: How exactly is LWTA incorporated into the transformer?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper does discuss two relevant limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for his kind comments on the usefulness of our approach.
> Clarity could be improved: The figures are not very helpful. Especially Fig. 1 could be improved by simplifying it (e.g., without showing individual neurons and weights). In my opinion, Fig. 2 is not necessary and could instead make space for a much-needed figure to illustrate the DISCOVER framework. The math feels a bit convoluted with too many new symbols introduced in each paragraph. Again, simplification could help understandability. The paper spends quite some time explaining how LWTA and DISCOVER can be used for convolutional networks. However, the main results of the evaluation focus on transformer architectures, for which it is less clear how LWTA is incorporated (only replacing the ReLU inside the MLP block, or also in the attention and residual branch?). I suggest switching focus and making conv architectures much less prominent.
We thank the reviewer for their suggestions. Since LWTA-based networks are quite underexplored, we focused on the most-used operations, i.e., inner products and convolutions. Per the reviewer's suggestion, we will simplify the figures and notation in the camera-ready and shift the focus to the transformer architectures. As the reviewer correctly notes, in this work we replaced the GeLU nonlinearity in the MLP layer of the transformer; this decision was motivated by recent works that have shown that transformer MLPs encode knowledge attributes [1,2,3]. We aim to explore other modifications of transformer architectures in the near future.
[1] Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, and Furu Wei. Knowledge neurons in pretrained transformers, ACL, 2022
[2] Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. Transformer feed-forward layers are key-value memories, EMNLP, 2021
[3] Kevin Meng, Arnab Sen Sharma, Alex Andonian, Yonatan Belinkov, and David Bau. Mass editing memory in a transformer, ICLR, 2023
> The qualitative results in Figure 3, which are only partially convincing. Eg. the "aquaculture" neuron seems to react to a highway, and the "dashboard" neuron mostly picks up on hifi equipment.
Please see the response to Q4 of Rev. kXaB:
There are instances where both the conventional networks and the competitive vision networks do not provide optimal results, e.g., Neuron 242 in Fig. 3; this result was purposely included in the figure to highlight this fact. This is also the case with the original CLIP-Dissect work: therein, in Fig. 1, some of the examples that highly activate Neuron 683 have no relationship with the Neuron Identification label. In turn, this is why we considered the quantitative analysis in Table 2 using the metrics introduced in the CLIP-Dissect paper. Using the introduced metrics, it is quite clear that using the LWTA mechanism yields a higher cosine similarity between the intermediate neuron descriptions and the ground-truth labels compared to the conventional method that does not employ the LWTA mechanism; this illustrates the improvement in performance. Devising other quantitative methods for comparing conventional and LWTA-based vision networks, especially in the context of Network Dissection where direct comparison is rendered difficult, is a substantial research challenge that we nevertheless aim to address in the future.
In our view, it is not an easy task to construct a best-performing method in all settings and tasks. However, we believe that the LWTA mechanism and its unique mode of operation provide substantial benefits for constructing more interpretable networks.
> Table 2, which shows that, compared to non-LWTA-networks) the textual embedding of the discovered neuron descriptions is more similar to the corresponding ground truth label. However the CLIP cosine similarity score is hard to interpret and the reported gains seem mild. For a paper that focuses on interpretability, I find the results hard to interpret. As it stands, the paper has me convinced that stochastic LWTA is interesting and should be studied further, but fails to convince me of the added value of DISCOVER over e.g. CLIP-dissect.
We believe that the previous response answers this question. We want to emphasize that CLIP-Dissect is a post-hoc interpretability method that was applied on top of conventional vision architectures. In this work, we explored the potency of Competitive Vision Networks by training such networks from scratch in a variety of settings and applying CLIP-Dissect as a post-hoc method. Thus, in our view, it is not a matter of whether DISCOVER is better than CLIP-Dissect (since it uses CLIP-Dissect itself), but whether the inherent competition mechanism facilitates improvement in the post-hoc analysis and interpretability of the results. Thus, and given that we are "limited" by the CLIP-Dissect framework, we believe that our investigation has provided strong empirical evidence for the efficacy of CVNs compared to conventional architectures in this setting.
> How exactly is LWTA incorporated into the transformer?
We believe the answer to the reviewer's first question above addresses this point: we focused on replacing the GeLU activation in the MLP block of the transformer with the LWTA mechanism.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I thank the authors for the clarifications. I understand now that advertising the usefulness of stochastic LWTA is part of the contributions of this paper. In particular, highlighting the fact that it can help with interpretability without sacrificing performance.
Because of that, I raise my score to borderline.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for taking the time to comment on our clarifications and for raising his score. We will make sure that all the reviewer's suggestions are incorporated in the final version. | Summary: The paper proposes DISCOVER, a novel framework for creating interpretable vision networks. It utilizes stochastic Winner-Takes-All layers and multimodal vision-text models to uncover the specialization of each neuron. The framework achieves high activation sparsity, allowing for direct examination of concept representations. It also introduces a new similarity metric for network dissection based on Jensen Shannon Divergence. Experimental results demonstrate improved classification performance and provide a principled framework for text-based descriptions of neuron representations. The paper pioneers the training of LWTA networks on large datasets, yielding interpretable functionality with a small subset of neurons per datapoint.
Strengths: The strengths of this paper include:
1. Novel Framework for Network Dissection: The paper introduces a novel framework, DISCOVER, for creating interpretable vision networks. It combines stochastic Winner-Takes-All layers and multimodal vision-text models, providing a unique approach to uncovering neuron specialization.
2. Improved Classification Performance: Despite not performing hyperparameter tuning, the DISCOVER framework achieves comparable classification performance.
3. Interpretable Similarity Metric: The paper introduces a new similarity metric based on Jensen Shannon Divergence to match neuron representations to concepts.
4. Pioneering LWTA Networks: The paper contributes to the literature by being one of the first to train large-scale LWTA networks using substantial datasets and architectures.
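For reference, the Jensen-Shannon divergence underlying the similarity metric in point 3 can be sketched as follows. This is the generic textbook definition in NumPy; how the paper matches it against neuron and concept distributions is its own contribution and is not reproduced here:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence (in nats) between two discrete
    distributions: JSD = 0.5*KL(p||m) + 0.5*KL(q||m), with m = (p+q)/2.
    Symmetric and bounded above by ln(2), unlike plain KL."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0            # 0 * log(0/...) contributes nothing
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

The symmetry and boundedness are what make the JSD a natural fit for comparing the categorical (winner-probability) distributions that stochastic LWTA layers induce.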
Weaknesses: 1. The WTA approach in this neural network framework draws inspiration from methods employed before the advent of deep learning. For instance, techniques like bag of words and Locality-constrained Linear Coding (LLC) (Wang et al., "Locality-constrained linear coding for image classification," CVPR, 2010), among many others, also involved neuron competition for activation.
2. It would be valuable to have a clearer understanding of the level of sparsity in the network without WTA. While the paper mentions that in a network without WTA sparseness is "input specific," providing some statistical analysis or comparisons of the sparseness level with non-WTA networks could provide insights into the necessity and effectiveness of the WTA mechanism.
3. The paper assumes that sparse representations lead to more interpretable units, but the connection between sparsity across neurons and human interpretable concepts needs further clarification. It would be beneficial to elaborate on the theoretical arguments or underlying assumptions supporting this claim. Additionally, providing empirical evidence or previous research that supports the notion that sparse representations indeed enhance interpretability would strengthen the argument.
4. The qualitative results provided in the paper are limited, particularly regarding the comparison between applying the WTA mechanism and not applying it. It would be insightful to include examples of units that are not interpretable when the WTA mechanism is not utilized. This comparison would help illustrate the added interpretability that the WTA approach brings to the network and provide a more comprehensive understanding of its impact on the model's performance.
5. The dependence on the interpretability similarity metric raises questions about the generalizability of the results. While the paper introduces a new similarity metric based on Jensen Shannon Divergence, it would be valuable to investigate the robustness of the findings using alternative metrics and visualization techniques. Demonstrating the consistency of the results across different evaluation methods would strengthen the validity and generalizability of the proposed framework for increased interpretability. Exploring the impact of alternative metrics and visualization techniques could provide a more comprehensive understanding of the interpretability achieved by the network.
By addressing these concerns, the paper can further enhance its contributions by providing more insights into the necessity and effectiveness of the WTA mechanism, establishing theoretical justifications, and expanding the qualitative analysis to better understand the interpretability aspect.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: As I mentioned earlier, there is a concern regarding the lack of clear evidence establishing a direct relationship between sparsity and interpretability. While the paper assumes that sparse representations lead to increased interpretability, it would be beneficial to provide results that demonstrate this relationship across different levels of sparsity, utilizing various interpretability measures. By systematically examining the impact of different sparsity levels on interpretability using diverse evaluation metrics, the paper could provide stronger evidence for the claim that sparsity enhances the interpretability of individual units. This analysis would contribute to a deeper understanding of the relationship between sparsity and interpretability and provide more robust support for the proposed framework.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: I believe the limitation section could take into account the points I raised in weaknesses, as they were not mentioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for his constructive comments on improving the presentation of the strengths of our work.
> Similarity of WTA to Linear Coding (LLC).
The LWTA approach does indeed bear some similarity to Locality-constrained Linear Coding with respect to grouping features and selecting one when using hard VQ. However, the construction, the learning of the features, and the applicability of the two approaches are highly different.
> It would be valuable to have a clearer understanding of the level of sparsity in the network without WTA. While the paper mentions that in a network without WTA sparseness is "input specific," providing some statistical analysis or comparisons of the sparseness level with non-WTA networks could provide insights into the necessity and effectiveness of the WTA mechanism.
This is a very valuable suggestion from the reviewer and will be included in the main text.
We consider the official pretrained conventional DeiT-T checkpoint with the standard GeLU activation and report its sparsity level: this is obtained by averaging the number of activated neurons after the application of the GeLU nonlinearity in the MLP block of DeiT, considering a neuron as active if its absolute value is greater than a small constant, i.e., $10^{-6}$, to account for rounding errors. For this test, we average over all test examples of the ImageNet-1k validation set and over all layers of DeiT. We observed that $98$% of the neurons are active in this case. Using the exact same measurement for our LWTA-based DeiT-tiny implementation with 16 competitors yields the expected $6.25$% sparsity.
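The counting rule described above can be sketched as follows. This is a minimal illustration of the thresholded-activity measure only; the hooks that would collect post-GeLU activations from DeiT's MLP blocks are omitted and the function name is our own:

```python
import numpy as np

def active_fraction(activations, eps=1e-6):
    """Fraction of neurons counted as active, where a neuron is active
    if its post-nonlinearity magnitude exceeds eps (the small threshold
    absorbs rounding errors, as in the comparison described above)."""
    acts = np.asarray(activations)
    return float((np.abs(acts) > eps).mean())
```

For a stochastic LWTA layer with $U=16$ competitors, exactly one unit per block is nonzero, so this measure returns $1/16 = 6.25\%$ by construction; for a GeLU network it simply counts how many units survive the threshold.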
> The paper assumes that sparse representations lead to more interpretable units, but the connection between sparsity across neurons and human interpretable concepts needs further clarification.
We thank the reviewer for raising this point, as it will help us improve the phrasing in the main text and avoid potential misunderstandings about our claims on the relation between sparsity and interpretability. On this point, we re-iterate the response to Question 3 of reviewer BNcZ:
In the context of interpretability, activating, e.g., 1/16 of the original neurons in each layer surely facilitates the process of neuron specialization: since not all neurons are activated for each example, neurons can specialize to only the features of the examples for which they are active. After Neuron Identification is performed, we can even narrow down the neurons that were activated by a particular test input and try to reason using only these for a downstream task. For example, if we consider a hidden layer with 800 components and U=16 competitors, it is easier to reason about and investigate the 50 activated components than to investigate the effect of all components in conventional architectures.
We believe that this argument clarifies our sparsity-interpretability claims and resolves the reviewer's concerns. In the context of our future work, we aim to extensively analyse and evaluate how this data-driven sparsity mechanism can further facilitate the interpretability of modern architectures, considering a variety of ante-hoc and post-hoc methods, from linear-probe methods up to, and beyond, Concept Bottleneck Models.
> The qualitative results provided in the paper are limited, particularly regarding the comparison between applying the WTA mechanism and not applying it.
Considering the radically different mode of operation of Competitive Vision Networks and conventional architectures, it is not easy to draw a distinction between the methods on a qualitative basis since there isn't a one-to-one matching. This point is emphasized in the main text in line 305.
There are instances where both the conventional networks and the CVNs do not provide the optimal results, e.g., in Fig. 3 Neuron 242; this result was purposely included in the Figure to highlight this fact. This is also the case with the original CLIP-DISSECT work: therein, in Fig. 1 Neuron 683, some of the examples that highly activate the neuron have no relationship with the Neuron Identification label. In turn, this is why we considered the quantitative analysis in Table 2 using the metrics introduced in the CLIP-Dissect paper. Using these metrics, it is quite clear that the LWTA mechanism yields a higher cosine similarity between the intermediate neuron descriptions and the ground truth labels compared to the conventional method without LWTA; this illustrates the improvement in performance. Devising other quantitative methods for comparing conventional and LWTA-based Vision Networks, especially in the context of Network Dissection where one-to-one direct comparison is difficult, is a substantial research challenge that we nevertheless aim to address in the future.
> The dependence on the interpretability similarity metric raises questions about the generalizability of the results.
The reviewer is concerned about the generalizability of the results. However, as mentioned in the main text (and as we will further expand in the camera ready), one of the main ``limitations'' of the proposed framework is its dependence on, and the restriction of comparisons to, the CLIP-DISSECT approach. Concerning the visualization techniques issue that the reviewer raises, we believe this was addressed in the previous question. For the evaluation metrics, we considered both the best performing metric that the original publication uses (Soft WPMI), which substantially improved upon their other considered metrics, as well as the introduced Jensen-Shannon divergence. We validated the improved performance using the cosine similarity that the original publication uses, but beyond that, we believe that introducing even more measures and techniques goes beyond the reasonable scope of this work.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. After reading the rebuttal, I am more convinced that the assumption that sparser representations are more interpretable is not correct. We cannot make such an assumption: neurons may activate for few images, but there may be no clear, human-interpretable pattern in the set of images that helps humans interpret the activity. So, I would like to keep my score, as this assumption is at the core of this work.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for his response, allowing us to further add to this point.
The underlying argument that sparsity is intuitively related to interpretability is also noted by reviewer CPpE "The argument that achieves higher sparsity and thus higher interpretability intuitively makes sense, but is this supported empirically?".
Sparsity and Interpretability have recently gained a lot of attention with recent works aiming to create more interpretable layers via sparsification in the context of Concept Bottleneck Models in particular [1,2]. Even though these operate on the basis of a sparse linear final layer, the premise is the same: "A sparse linear model allows us to reason about the network’s decisions in terms of a significantly **smaller set of deep features**" [1].
In conventional ReLU/GeLU-based vision networks, **ALL** neurons are activated for **ALL** examples, and neuron identification and interpretability are examined and analysed on this basis. In this context, we posit that activating only *a subset of neurons for each example* in the context of CVNs, facilitates the examination of the activated neurons (which are tied to particular concepts via Neuron Identification) and allows for reasoning on **that specific subset** in a similar manner to the seminal work of [1].
[1] Eric Wong, Shibani Santurkar, and Aleksander Madry, Leveraging Sparse Linear Layers for Debuggable Deep Networks, ICML 2021
[2] Tuomas Oikarinen, Subhro Das, Lam M. Nguyen, Tsui-Wei Weng, Label-Free Concept Bottleneck Models, ICLR 2023
---
Summary: The work proposes an interpretable network that uses concepts from competition networks: for a specific output there are multiple copies of the neuron, and the neurons compete to provide the winning activation for that output. The method proposes MLP and convolutional versions of the network. The work also provides evidence of interpretability by using CLIP-DISSECT and shows that the method is more interpretable while using the same number of parameters as DeiT.
Strengths: 1. The method is quite simple and elegant in its motivation
2. Existing modules can be swapped with the proposed block without any changes
Weaknesses: 1. The method is essentially a stochastic version of maxpooling where the deterministic version is exactly maxpooling. Is there an ablation to show how the stochastic version is more appropriate than a deterministic one?
2. Having 2 competitors seems to provide the best performance, whereas using more decreases the performance of the network. Is this because of using fewer "effective neurons" (as the model size is kept the same), or does the network inherently start degrading with more competition? If it's the second case, then it does not put competition networks in a good light.
3. It seems wasteful that with more competition exponentially lower number of neurons in the model are activated.
4. Can it be used in networks for other tasks? Segmentation, object detection, etc.?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weakness.
1. Gumbel softmax generally requires a bit of finetuning. How stable is the Gumbel softmax training on different datasets?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We thank the reviewer for his insightful analysis and crucial comments and questions.
> 1. The method is essentially a stochastic version of maxpooling where the deterministic version is exactly maxpooling. Is there an ablation to show how the stochastic version is more appropriate than a deterministic one?
There is one pivotal difference between maxpooling and the LWTA mechanism: in the LWTA context we do not just output the max value but instead, and since we do not perform any reduction operation, we retain the positional information of both the active and inactive neurons, yielding *sparse* representations for each layer; this in turn allows for highly different patterns for each example, facilitating learning through specialization. See also the discussion in [1].
We did not perform an ablation study to assess the relative performance between the deterministic and stochastic variants considering the significant computational complexity. However, several prior works (line 137, main text) have consistently exhibited that the stochastic LWTA formulation outperforms its deterministic counterpart in various settings. Moreover, only the stochastic variant allows for introducing an additional metric for matching neurons to labels, i.e., the proposed Jensen-Shannon divergence.
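To make the distinction concrete, here is a minimal NumPy sketch (illustrative only, not the paper's implementation): maxpooling reduces each group to a single value, whereas hard LWTA keeps the full dimensionality and zeroes the losers, so the sparse activation pattern itself carries positional information:

```python
import numpy as np

def maxpool(x, k):
    # Reduction: each group of k values collapses to its max, so the
    # positional information within the group is discarded.
    return x.reshape(-1, k).max(axis=-1)

def lwta(x, k):
    # LWTA: same grouping, but the output keeps the original dimension;
    # losers are zeroed, the winner stays in place -> a sparse pattern.
    blocks = x.reshape(-1, k)
    out = np.zeros_like(blocks)
    rows = np.arange(blocks.shape[0])
    idx = blocks.argmax(axis=-1)
    out[rows, idx] = blocks[rows, idx]
    return out.reshape(-1)

x = np.array([0.3, -1.2, 2.0, 0.1, -0.5, 0.9, -0.2, 0.4])
print(maxpool(x, 4))  # shape (2,): winner values only
print(lwta(x, 4))     # shape (8,): sparse, positions kept
```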
> 2. Having 2 competitors seems to provide the best performance whereas using more decreases the performance of the network. Is this because of using less "effective neurons" (as the model size kept same) or with more competition the network inherently starts degrading? If it's the second case than it does not put competition networks in a good light.
We thank the reviewer for this question, as it will help clarify some key points. The goal of this work was not to achieve state-of-the-art performance for Vision tasks (main text, line 283), but instead to provide empirical evidence towards a viable and radically different LWTA-based mode of operation for both Vision Networks and interpretability overall. To this end, we retained the training procedures for conventional ReLU/GeLU-based networks that have been rigorously optimized. We strongly believe that the observed performance difference is not a matter of using ``less effective'' neurons, but that these procedures are highly sub-optimal for the mode of operation of LWTA; thus, extensive ablation studies and optimization will alleviate this issue (if one aims for classification accuracy, as was the case with the baselines). However, this is not feasible in the scope of a single publication.
Moreover, prior works in pruning and sparsity mechanisms have provided substantial empirical evidence of the significant redundancy of components in modern architectures. This fact further supports our claim that it is not a problem of using less effective neurons but rather that CVNs have yet to reach their full potential. This work constitutes a first, and in our view very attentive, investigation of the behavior of CVNs at large scale.
> 3. It seems wasteful that with more competition exponentially lower number of neurons in the model are activated.
Similarly to the previous points, using a lower number of neurons may seem wasteful at first glance, but it also comes with significant benefits in various settings, from meta learning to adversarial robustness (see also [1,2]). In the context of interpretability, activating, e.g., $1/16$ of the original neurons per each layer, potentially facilitates the process of neuron specialization: since not all neurons are activated for each example, neurons can specialize to only the features of the examples for which they are active. After Neuron Identification is performed, we can even narrow down the neurons that were activated by a particular test input and try to reason using only these for a downstream task. For example, if we consider a hidden layer with 800 components and $U=16$ competitors, it is easier to reason about and investigate the 50 activated components instead of investigating the effect of all components in conventional architectures. This can be applied in a variety of settings such as Concept Bottleneck Models and other approaches that we aim to explore in the future.
> 4. Can it be used in networks for other tasks? Segmentation, object detection etc.?
WTA-based activations can in principle be used in any kind of network, replacing any of the conventional nonlinearities, in any kind of setting and task. For example, it has recently been explored in the context of end-to-end sign language translation [3]. Interpretability in the context of classification is a significant research and societal challenge, thus we focused on this setting. We hope that this work can facilitate the application of this underexplored but important mechanism to every ML task.
> Q1: Gumbel softmax generally requires a bit of finetuning. How stable is the gumbel softmax training on different datasets?
For training all our networks we set the temperature of the Gumbel-Softmax relaxation to a fixed $\tau = 0.67$ value as suggested in [4]. We found this choice to be extremely stable without requiring any additional finetuning. It is however possible that finetuning this value may provide further improvements accuracy-wise, especially when altering the number of competitors (essentially the number of categories of the distribution). We aim to explore training improvements in the near future.
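For concreteness, a generic Concrete/Gumbel-softmax sampler with this fixed temperature can be sketched as follows (the exact parameterization in the implementation may differ; this only illustrates the relaxation):

```python
import numpy as np

TAU = 0.67  # fixed relaxation temperature, following Maddison et al. (2017)

def gumbel_softmax(logits, tau=TAU, rng=None):
    # Concrete/Gumbel-softmax sample: a point on the probability simplex
    # that concentrates on a single category as tau -> 0.
    if rng is None:
        rng = np.random.default_rng()
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = y - y.max()  # numerical stability
    e = np.exp(y)
    return e / e.sum()

rng = np.random.default_rng(0)
sample = gumbel_softmax(np.array([1.0, 0.5, -0.3, 0.2]), rng=rng)
print(np.round(sample, 3), "sums to", sample.sum())
```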
[1] Compete to Compute, R. Srivastava, J. Masci, S. Kazerounian, F. Gomez, J. Schmidhuber, NIPS, 2013
[2] Konstantinos Kalais and Sotirios Chatzis. Stochastic deep networks with linear competing units for model-agnostic meta-learning, ICML, 2022
[3] Stochastic Transformer Networks with Linear Competing Units: Application to end-to-end SL Translation, Voskou et al., ICCV, 2021
[4] Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous relaxation of discrete random variables. In Proc. ICLR, 2017.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for the extensive comments on my review. The response addresses many of my concerns.
> 1. The method is essentially a stochastic version of maxpooling where the deterministic version is exactly maxpooling. Is there an ablation to show how the stochastic version is more appropriate than a deterministic one?
Answer is mostly convincing. Thank you for additionally pointing to citing [1]. An argument against is that the reduction can be easily undone for maxpooling (e.g., as done in backpropagation).
> 2. Having 2 competitors seems to provide the best performance whereas using more decreases the performance of the network. Is this because of using less "effective neurons" (as the model size kept same) or with more competition the network inherently starts degrading? If it's the second case than it does not put competition networks in a good light.
Answer is mostly convincing. Although if the current setup is suboptimal, it puts the work at a disadvantage. The work is skipping a step in the discovery process. There should exist a well-optimized LWTA-based network to start with, before performing any kind of analysis on its advantages.
> 3. It seems wasteful that with more competition exponentially lower number of neurons in the model are activated.
Answer is mostly convincing. I agree with reviewer kXaB that a lower number of neurons may not necessarily mean better interpretability. There are still 50 neurons interacting with each other in the next step. There is no guarantee that each neuron is going to be specialized.
> 4. Can it be used in networks for other tasks? Segmentation, object detection etc.
The answer is satisfactory.
> Q1: Gumbel softmax generally requires a bit of finetuning. How stable is the gumbel softmax training on different datasets?
The answer is satisfactory.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the comments on our response.
> Although if current setup is suboptimal it puts the work at a disadvantage.
We agree with the reviewer that it would be optimal to consider the best-optimized LWTA network. However, this process requires a very significant amount of computational resources and computational time in general. Nevertheless, in this work, we trained LWTA-based networks that match or outperform their ReLU/GeLU-based counterparts, to which many post-hoc interpretability methods have already been applied. This is the first time that something like this has been achieved, and we hope that this work further advances investigations of CVNs.
> There is no guarantee that each neuron is going to be specialized.
We agree with the reviewer on this point, and we are not claiming or providing any theoretical or empirical guarantees about ideal specialization or sparsification. What we posit, though, is that the lower number of neurons to examine can indeed facilitate the investigation of the network's decision process.
After training a specific backbone network, we apply the post-hoc CLIP-Dissect method. In this context, each neuron is tied to a specific concept according to how these neurons are activated when presented with the probe dataset. This is true for both the conventional and the LWTA-based networks.
However, in conventional networks most neurons are active (see response to reviewer kXaB); specifically, we observed that on average, 98% of neurons are active for each example (averaged over the validation set of ImageNet-1k). Thus, even neurons that are tied to very different concepts are very likely active: the neuron tied to the color yellow (neuron 50 in Fig. 3) is probably active for an image such as the manatee that activates neuron 700, which is tied to aquaculture. This also applies to a potential co-activation of neuron 242 and neuron 46 from the same figure, *since on average 98% of neurons are active per example*.
In contrast, and in the context of the considered framework, we argue that the sparse mode of activation of LWTA reduces the number of neurons that activate for each example (with 16 competitors, 6.25% active neurons); we can then examine which concepts these neurons are tied to (since we have already applied CLIP-Dissect) and investigate and infer the decision process on a substantially smaller subset of neurons.
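A back-of-the-envelope illustration of this point (under an idealized assumption of independent firing, which real networks only approximate):

```python
# With per-neuron activation probability p, two concept neurons co-activate
# with probability about p^2 if they fire independently.
dense_p = 0.98     # measured density of the conventional GeLU DeiT (see above)
lwta_p = 1.0 / 16  # density under LWTA with 16 competitors

print(f"dense GeLU: P(both active) ~ {dense_p ** 2:.4f}")  # ~0.96
print(f"LWTA, U=16: P(both active) ~ {lwta_p ** 2:.4f}")   # ~0.004
```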
---
Ordering-based Conditions for Global Convergence of Policy Gradient Methods
Accept (oral)
Summary: The paper theoretically proves that, for finite-arm bandits with linear function approximation, the global convergence of policy gradient (PG) methods is not dependent on approximation error, but rather on the ordering properties of the reward representation. The global convergence is achievable for both standard Softmax PG and natural policy gradient (NPG) under linear function approximation. This result is also verified using simple examples and empirical experiments.
Strengths: This paper provides a completely new understanding of the convergence of policy gradient methods, namely that global convergence only depends on specific ordering-based conditions instead of classical approximation requirements. To my knowledge, I have not seen such a result before.
This novel insight is also significant. There are many situations where the approximation error can never be sufficiently small. This work provides a key to understanding the convergence of PG-based algorithms, and I agree with the authors' claim that this paper will open new directions for PG-based methods under function approximation.
This work is well-written. I can clearly understand which problem this paper is solving; in particular, the examples given in Section 3 are very helpful for understanding the motivation for proposing the ordering-based conditions. The paper also rigorously defines two new ordering-preservation conditions and proves the necessity or sufficiency of the proposed conditions.
Weaknesses: The studied case only contains one state, so it is a simple bandit optimization problem. Though the main result of this paper is inspiring and interesting, the paper lacks (1) persuasive evidence showing that these understandings will also be valid for more general cases, and (2) sufficient discussion of how to generalize this result to a more general Markov decision process.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: The single-state MDP is a very special MDP; the proposed idea may or may not work in a more general scenario. How will this work be generalized to a multi-state MDP? Are there any simulation results on that?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: This is a theoretical work so there is no negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We appreciate that the reviewer recognized the contribution of the work. We answer the questions as follows.
>**discussion of how to generalize the results to MDPs**
Generalizing the results to MDPs is an important and challenging next step, as we mentioned in the conclusions. Since other reviewers (P4aQ, m2vF) also asked this question, we present the discussions for MDPs in the common feedback, and please check that for details. Thank you.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for the detailed explanation regarding the generalization to MDPs. It further confirms my belief that this paper has made a significant contribution and could potentially lead the way for a new direction. I have read comments from other reviewers, and none raised any new concerns for me. So I will keep my current positive scores. | Summary: This work studies the global convergence of standard softmax policy gradient and natural policy gradient for finite-arm bandits with linear function approximation (i.e., considering a log-linear policy). It is shown that the approximation error is not crucial for characterizing global convergence since the latter can be achieved even with non-zero approximation error. Ordering-based conditions are provided instead to guarantee the global convergence of softmax PG and natural PG while the approximation error is non-zero. More precisely, this paper establishes that both NPG and softmax PG converge globally when (a) for NPG, the optimal action’s rank is preserved by the projection of the true reward onto the space of representable rewards and (b) for softmax PG, there exists a linear function preserving the ranking of actions provided by the true reward function. The case of a linearly realizable reward function is a particular case of the aforementioned reward rank preservation condition. Numerical examples illustrate all these results throughout the paper.
Strengths:
The main results of this paper provide very interesting insights on the global convergence of some PG methods under the function approximation setting for finite-arm bandits. The paper provides strong and solid contributions.
**(a) Originality**: the paper challenges common existing analysis featuring the approximation error as a natural limit when considering function approximation. Results are new to the best of my knowledge.
**(b) Significance**: as also highlighted by the paper in the conclusion, the results are likely to inspire further important developments for function approximation in the more general setting of MDPs, nonlinear approximation and representation learning.
**(c) Correctness**: To the best of my knowledge, except for the proof of Theorem 1 which is more involved and may deserve some clarifications in my opinion (see my comments below), the proofs in the appendix are correct, complete and relatively easy to follow. I checked the appendix in detail.
**(d) Clarity and writing**: This paper is very well-written, the structure is clear, the exposition is progressive, the story line is very nice and the illustrations are insightful. This is a solid body of work.
Weaknesses: - While the proof of Theorem 2 is very clear to me, some parts of the proof of Theorem 1 (and its sketch l. 266 to 293) deserve some clarifications in my opinion. Please see my comments in the questions section.
- Comment: The proof of Theorem 2 seems to follow similar lines to the proofs in Khodadadian et al. 2022, even if the latter work does not consider linear function approximation nor does it propose ordering-based conditions (while it does apply to MDPs). The main difference seems to be replacing the true gap $\Delta$ by the gap induced by the linear approximation $\hat{\Delta}$. For similarities, see, e.g., Appendix C in Khodadadian et al. 2022 for the bandits case or the main part of the paper (Theorem 1, Lemma 1, Proposition 1, pp. 3-4 and their proofs).
Khodadadian, S., Jhunjhunwala, P. R., Varma, S. M., & Maguluri, S. T. (2022). On linear and super-linear convergence of Natural Policy Gradient algorithm. Systems & Control Letters, 164, 105214.
**Minor:**
- It is not very clear what $v_2$ is in l. 278-279 (especially since there is also $v$ in l. 278), what $v_1$ is in l. 283, and how they relate to $v$ in l. 268; I find the notations and the formulation a bit confusing in l. 266-293.
- Please give the precise reference to Theorem 1 when you reference [8] in lines 116 and 123.
- $r(a^*)$ introduced in l. 110 does not seem to be defined before (definition of $a^*$ comes later in l. 140).
- Numerical legends in all the figures are almost invisible in a printed format (although visible when zooming on a computer), it would be nice to increase the size of the numerical characters (and save the figure in pdf format if not already done) to ease the reading.
- Eq. (27): Why is the inequality strict? This (strict) positivity does not seem to be used anyway.
- Suggestion: Eq. (102) maybe add $>0$ for clarification.
- Suggestion: Eq. (149) to Eq. (150) maybe add the splitting sum steps with the order summing exchange to detail a bit more for the convenience of the reader.
**Typos:**
- l. 446: $\theta’ \in \mathbb{R}^d$ instead of $\mathbb{R}^K$?
- l. 474: $a \in [K]$ instead of $i \in [a]$.
- l. 481: Eq. (58) instead of (54).
- l. 493: Eq. (68) instead of (65).
- Eq. (97): Eq. (94) instead of (91).
- Eq. (101): Eq. (97) instead of (95).
- Eqs. (138) and (139): $\max_a$ instead of $\max_i$.
- l. 539, Eq. (144): say there exists $\theta_{\zeta} \in [\theta, \theta']$ such that ...
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: About the proof of Theorem 1, I have the following technical questions (1. (a)-(b)) which constitute my main concerns and that I would like to be addressed for clarification before updating my evaluation:
**1. (a)** Second part/Lemma 1 (l. 455): Why does the fact $\|\| \frac{d \pi_{\theta_t}^T r}{d \theta_t}\| \|_2 \to 0$ imply that $\|\|\theta_t\|\| \to \infty$? I understand that there is no stationary point for any $\theta \in \mathbb{R}^d$ following the proof but how do you conclude properly that then $\|\|\theta_t\|\| \to \infty$? In particular in the proof, l. 446: what does the assumption
suppose there exists $\theta' \in \mathbb{R}^K$ with $\|\|\theta’\|\|_2 < \infty$ mean?
It is proven (by contradiction) that for every $\theta’ \in \mathbb{R}^d$, we have $\frac{d \pi_{\theta’}^T r}{d \theta’} \neq 0$.
To conclude that $\|\|\theta_t\|\| \to \infty$, I would expect that the proof assumes that the sequence $(\theta_t)$ is bounded and then finds a contradiction. Could you please clarify the reasoning here?
**1. (b)** Third part: the general reasoning and the conclusion l. 498 to 501 are not very clear to me.
(i) It is proven by contradiction that every suboptimal action $i$ s.t. (47) is satisfied, $\pi_{\theta_t}(i)$ does not converge to 1. How does this help concluding that for the optimal action $a$ (or any optimal action if non unique) we have $\pi_{\theta_t}(a)$ converges to 1? Could you clarify the reasoning and the guiding intuition?
(ii) Could you also elaborate more on how (60), (69), (71) and (72) are used to conclude?
(iii) Is the third case $j \in \mathcal{A}(i)$ (i.e., $i,j$ are mapped to the same reward value) handled in the proof? If trivial, maybe a word about it can be added.
(iv) Minor: what do you mean by $X$ and $u$ are bounded in l. 492? $X$ is the feature matrix and $u$ is a fixed vector in $\mathbb{R}^d$.
**2.** Could you clarify why the possibility of having exponentially many suboptimal local maxima renders global convergence (to the optimal reward) impossible without further structure on function approximation as stated in l. 123-124? Is it because there is more chance to be stuck at a local suboptimal local maxima? How does the ordering-based condition preclude such a case in Theorem 1?
**3.** Do the examples of section 3 allude to the fact that the ordering of the features matters since the examples only differ between them by column permutations?
**4.** In proposition 1 (and more generally in the paper), do you assume that there exists a unique optimal action? If there are many, I guess any optimal action would work. You may want to add a comment regarding this.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Limitations and opportunities for future work are briefly discussed in the conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Thank you for carefully reading and checking the results. The main concerns are addressed as follows.
>**Comment on Khodadadian et al. 2022**
Thank you for pointing this out. We will add a remark to mention the similarities and differences between Theorem 2 and Khodadadian et al. 2022.
>**1. (a)**
Suppose there exists $C < \infty$, s.t. for all $t \ge 1$, $\theta_t \in S_C := \\{ \theta \in \mathbb{R}^d: \\| \theta \\|_2 \le C \\}$.
For all $\theta \in S_C$, $\Big\\| \frac{d \pi\_{\theta}^\top r}{d \theta} \Big\\|_2 > 0$ (no finite stationary points). Since $S_C$ is compact, we have, $\inf\_{\theta \in S_C}{ \Big\\| \frac{d \pi\_{\theta}^\top r}{d \theta} \Big\\|_2 } \ge \epsilon$ for some $\epsilon > 0$.
Therefore, for all $t \ge 1$, $\Big\\| \frac{d \pi\_{\theta_t}^\top r}{d \theta_t} \Big\\|_2 \ge \epsilon > 0$, contradicting Eq. (28).
>**1. (b)**
(i) The whole $\theta_t \in \mathbb{R}^d$ space is partitioned into at most $K$ sub-regions (Figure 2). Due to the existence of an order-preserving $w \in \mathbb{R}^d$, $a^*$ has one sub-region (yellow in Figure 2) which satisfies,
\begin{equation*}
\[ X \theta_t \](a^*) = \max\_{a \in [K]} \[ X \theta_t \](a)
\end{equation*}
Since $\\| \theta_t \\|_2 \to \infty$, if **$\theta_t$ always stays in the above region as $t \to \infty$**, then softmax transform gives $\pi\_{\theta_t}(a^*) \to 1$. We proved that for all $i \in [K]$ with $r(i) < r(a^*)$, $\theta_t$ cannot stay in sub-optimal regions forever.
In addition, $\theta_t$ cannot switch between different regions. As $\\| \theta \\|_2 \to \infty$, $\pi\_{\theta}^\top r \approx r(i)$ in a region, and switching regions contradicts $\pi\_{\theta\_{t+1}}^\top r \ge \pi\_{\theta_t}^\top r $ in Eq. (10). Moreover, $\theta_t$ cannot approach region boundaries, since it makes $\pi\_{\theta_t}$ approach non one-hot policies, such as $(0, 0.5, 0.5, 0)$ in Figure 2, which gives non-zero inner product in Eq. (33), contradicting Eq. (28).
**In summary**, $\\| \theta_t \\|_2 \to \infty$ and $\theta_t$ can only stay in one region, and we proved that $\theta_t$ cannot stay in any sub-optimal region, which implies that $\theta_t$ eventually stays in the optimal region, indicating that $\pi\_{\theta_t}(a^*) \to 1$.
(ii) Eqs. (69) and (71) give $w^\top \theta_t \gg 0 \ge u^\top \theta_t$ (the former is unbounded and the later is decreasing), which holds if scaling $w$ and $u$ by constants to get $w'$ and $u'$. For a large enough $t \ge 1$, there exists $w'$ such that $x_{a^*}^\top \theta_t \ge {w'}^\top \theta_t > {u'}^\top \theta_t \ge x_{a^-}^\top \theta_t$ (geometry argument, $u$ can be arbitrarily close to region boundaries). The direction of $\theta_t$ approaches $v$ implies $x_{a^*}^\top v > x_{a^-}^\top v$, meaning that $\theta_t$ enters the region in Case 1, contradicting the assumption of Case 2.
For example, take $u \propto (-0.9, 1)^\top$ and $w \propto (-0.9, -1)^\top$ in Example 1 (Figure 2), $w^\top \theta_t > u^\top \theta_t$ implies $\theta_t(2) < 0$, entering the dark green region (reducing to Case 1). Another example is if $\\| x_a \\|_2 = 1$ for all $a$ (features are on unit ball), then $u' = x\_{a^-}$ satisfies the inequalities.
(iii) If $j \in \mathcal{A}(i)$, then consider the third largest component and discuss the two cases again. If $j \in \mathcal{A}(i)$ for all components, then $\pi_{\theta_t}$ approaches the uniform policy $\text{softmax}(X v) = \text{softmax}(\mathbf{1})$, which is a contradiction since $\theta = \mathbf{1}$ cannot be a stationary point (unless $ r = \mathbf{1}$ according to Eq. (33), a trivial case since every policy is then optimal).
(iv) This means there exists $C < \infty$ such that $\max_{i \in [K], j \in [d]} | X_{i,j} | \le C$ and $\max_{j \in [d]} | u(j) | < C$. Consequently, for $y := X u$ in Line 489, $\max_{i \in [K]} | y(i) |$ is also bounded, which is used in Line 492.
>**2.**
In [1, Thm 1], if $\theta_1$ is close enough (in a basin of attraction) to one bad local maximum, then gradient ascent will make $\theta_t$ approach that bad local maximum.
The ordering-based condition precludes such a case. **First**, Lemma 1 shows that there is no stationary point in any finite region. **Second**, the landscape is "ordered" in the sense that no sub-optimal one-hot policy is a bad local maximum (rigorously speaking, they are not even stationary points, being infinitely far away). They are "saddle-point-like", being surrounded by a higher and a lower plateau. As a result, no sub-optimal one-hot policy attracts gradient trajectories, and the optimal plateau is the only "local-maximum-like stationary point" in Figure 1(a).
>**3.**
This is partially correct. If we change the numbers in Example 1 and keep Examples 2 and 3 the same, the same conclusion still holds, e.g., use $r = (9.2, 7.8, 7.1, 6.3)^\top$ and $X^\top = \[ -0.2, -1.1, 0.1, 2.1 ; -1.9, 0.2, 0.8, 0.1 \]$ in place of Example 1.
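As a quick numerical sanity check (our own sketch; the candidate $w$ below is found by inspection and is not from the paper), one can verify that these alternative numbers still admit a direction whose scores $Xw$ rank the actions exactly as $r$ does:

```python
import numpy as np

# Alternative Example 1 numbers from the response above.
r = np.array([9.2, 7.8, 7.1, 6.3])
X = np.array([[-0.2, -1.9],
              [-1.1,  0.2],
              [ 0.1,  0.8],
              [ 2.1,  0.1]])  # rows are feature vectors x_a; K = 4, d = 2

# Candidate order-preserving direction (chosen by inspection).
w = np.array([-1.0, -1.0])
scores = X @ w  # [2.1, 0.9, -0.9, -2.2]

# The ordering condition: X w must induce the same strict ranking as r.
assert np.array_equal(np.argsort(-scores), np.argsort(-r))
```

So the full reward ordering is still preserved in the image of the feature map, consistent with the claim that the same conclusion holds for these numbers.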
>**4.**
Having multiple optimal actions does not change the conclusion that $\pi_{\theta_t}^\top r \to r(a^*)$. The difference is that the behaviour of the optimal actions' probabilities needs to be discussed case by case, depending on whether the plateaus of different optimal actions are connected or separated on the landscape.
Consider two optimal actions $a_1^*$ and $a_2^*$ with $r(a_1^*) = r(a_2^*)$. If the plateaus for those two actions are separated on the landscape in Figure 1 (this is an intuitive statement), then we will have either $\pi_{\theta_t}(a_1^*) \to 1$ or $\pi_{\theta_t}(a_2^*) \to 1$, depending on where $\theta_1$ is initialized. Otherwise, if the plateaus for those two actions are connected on the landscape (neighborhoods of one another), then we will have $\pi_{\theta_t}(a_1^*) + \pi_{\theta_t}(a_2^*) \to 1$, since the current analysis only shows that $\theta_t$ approaches the "joint" optimal plateau formed by $a_1^*$ and $a_2^*$.
>**Typos and Minor comments**
Thank you for catching typos and minor issues. We will ensure these are fixed.
---
Rebuttal Comment 1.1:
Title: Post rebuttal
Comment: I thank the authors for their precise and satisfactory answers to my questions and concerns. I believe this is an interesting and solid paper which provides nice and novel insights about the convergence of PG methods in the linear function approximation setting and which has some potential to be extended to the more challenging MDP setting (despite important obstacles), the discussion provided by the authors as a general comment (in the rebuttal) regarding this last point is insightful. I raised my score to 8. Here are some minor additional follow-up comments below.
- I read the other reviews and the authors’ responses and I think the discussion regarding the verification of ordering-based conditions following the other reviewers’ comments could be added to the paper.
- 1.b I think the proof in the appendix can be revised to reflect the intuition provided in the rebuttal as a response to 1.b (i) in particular. I find the overall writing of the third part of the proof rather confusing, although the treatment of each of the subcases of the partition is rather clear. I refer in particular to the beginning: why consider the case where the limit of the normalized $\theta_t$ sequence converges to a constant vector in (47)? It is not very clearly discussed in the appendix what would happen if the normalized sequence does not converge to a fixed vector but rather rotates indefinitely (spiralling towards $+\infty$). This point was clearer to me after the rebuttal, given the responses to 1.b (i) and (ii).
- 3. A comment added to the paper regarding this would probably be useful, I guess ordering of the features should not be crucial in examples since feature matrices would induce the same linear space. I think it is relevant to show that examples are not only based on column permutations.
- 4. Thank you for the clarification, please update the paper accordingly to clarify this case where there are multiple optimal actions since it is not commented on this. | Summary: This paper challenges (arguably) the current best known policy gradient (PG) convergence analysis, which is the conventional approximation error based analysis originally proposed by the seminal work of Agarwal et al. (2021). To this end, the authors consider the finite-arm bandits with log-linear policy and study the conditions of the global convergence of PG and natural policy gradient (NPG).
**First**, by carefully designing numerical simulations, the authors show that global convergence can be achieved even if the parameterized policy space cannot cover the full policy space, i.e., the approximation error is not zero. Consequently, the approximation error is not a key quantity for characterizing global convergence in either algorithm under linear function approximation.
**Second**, the authors establish new conditions of the global convergence of PG and NPG for the same setting, separately. For NPG, the necessary and sufficient condition of the global convergence is whether the projection of the reward vector onto the feature map strictly preserves the top ranking of the optimal action. For PG, the sufficient but not necessary condition of the global convergence is whether there exists a point in the image of the feature map such that it preserves the entire ranking of the reward vector. These conditions are again well supported by numerical simulations.
Agarwal, Alekh, Sham M. Kakade, Jason D. Lee, and Gaurav Mahajan (2021). On the Theory of Policy Gradient Methods: Optimality, Approximation, and Distribution Shift. Journal of Machine Learning Research 22.98, pp. 1–76.
Strengths: The paper is revolutionary and the results are surprising. This paper may have the same impact on the theoretical RL community as that of Agarwal et al. (2021). The strengths of the paper can be summarized as follows.
- First, the research question itself on the approximation error assumption is important and revolutionary. Challenging previous pioneering work is never easy.
- Second, the argumentation of the paper is impeccable. Readers will probably be surprised at first, but will quickly be convinced by several simple but sophisticated numerical simulations.
- Not only are the authors able to find negative answers to questions about the role of approximation error in the global convergence of PG methods, but they also establish new results characterizing the conditions for the global convergence of PG methods and draw the connection between the new conditions and the approximation error assumption. The novelty of the paper is significant.
- In addition, the paper is very well written. The research question is well formulated. The new conditions are well presented. And the reasoning is detailed with intuitive explanations, figures, many examples and the proof sketch.
I agree with the authors that this work will open many new directions for understanding PG-based methods in the function approximation regime, especially considering the general Markov decision processes (MDPs).
Weaknesses: Although the paper is well written, I still find some minor points for improvement.
- In the figures, the authors can specify that y-axis is the reward / value function.
- Line 52-53: "... approximation error ..., diverting attention from feature designs that achieve useful properties beyond small approximation error." Although one goal of the paper is to claim that approximation error is not necessary, I find this sentence to be a little too dismissive of approximation error. It may leave the reader with the impression that approximation error is rarely useful outside of the tabular case, where the approximation error is zero. It is true that zero approximation error does not fit well with linear function approximation in general, which is the original motivation for the paper. However, when the approximation error is zero, the conventional approximation error based analysis becomes useful. For instance, another interesting case of zero approximation error is the use of neural networks, recently studied by Alfano et al. (2023) in Section 4.2. That is, a sufficiently wide and shallow ReLU network can approximate the Q-function arbitrarily well, so that the approximation error is zero. Consequently, their approximation error based analysis leads to a new SOTA sample complexity of the PG method under neural network parameterization.
- Line 361-362: "Extending the results and techniques to general Markov decision processes (MDPs) is another important and challenging next step." The authors can use an extra page in the revised version to discuss the intuition of possible obstacles to such an extension. See also my question below.
Alfano, Carlo, Rui Yuan, and Patrick Rebeschini (2023). A Novel Framework for Policy Mirror Descent with General Parametrization and Linear Convergence.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: First, I have a clarification question. The authors claim that they solve an open problem left by Agarwal et al. (2021). If I understand correctly, the authors refer to this one: _if zero approximation error holds, does softmax PG achieve global convergence?_ However, I couldn't find this claim in Agarwal et al. (2021). Instead, I found two open problems left by Agarwal et al. (2021).
- __(1)__ In their Remark 11 (journal version) / Remark 5.1 (arxiv version), the first open problem is whether or not softmax PG will converge globally if the initial state distribution is not state-wise strictly positive. Since this paper only considers bandit problems with one single state, the initial state distribution places probability 1 on that state. The paper does not resolve this open problem.
- __(2)__ In their Remark 14 (journal version) / Remark 5.2 (arxiv version), the second open problem is whether a polynomial global convergence rate is achievable for softmax PG with the entropy regularizer. This problem involves entropy regularization, which is outside the scope of the paper. Thus, the paper does not resolve this problem either.
Can the authors point me specifically to where Agarwal et al. (2021) claim the open problem mentioned right after Corollary 1 in this paper?
Next, I have some more open questions that the authors can decide to include them in the paper or not.
- Do the results of softmax PG and NPG in this paper still hold if the optimal action is not unique?
- Same question for the entropy regularized bandit case?
- Do the results for NPG with non-adaptive geometrically increasing step size still hold? Note that NPG and variants with non-adaptive geometrically increasing step sizes have been studied extensively recently (Lan, 2022; Xiao, 2022; Li et al., 2022; Yuan et al., 2023; Alfano et al., 2023). Can we obtain superlinear convergence as discussed in Xiao (Section 4.3, 2022) and Li et al. (2022)?
- As for the general MDP, one can think about the compatible function approximation framework of Agarwal et al. (2021). By analogy, the ranking condition of the reward vector for softmax PG becomes the ranking condition of the advantage function / Q-function, and the projection of the reward vector onto the image of the feature map for NPG becomes the projection of the advantage function / Q-function onto the image of the matrix $[\nabla_\theta \log \pi_{\theta}(a \mid s)]_{s, a}$. What do you think about this idea? Can this idea and the techniques used in this paper be extended to the general MDP? If not, what is the missing factor and what is the challenging obstacle? For simplicity, one can consider exact NPG instead of stochastic NPG.
Xiao, Lin (2022). On the Convergence Rates of Policy Gradient Methods. Journal of Machine Learning Research 23.282, pp. 1–36.
Li, Yan, Tuo Zhao, and Guanghui Lan (2022). Homotopic Policy Mirror Descent: Policy Convergence, Implicit Regularization, and Improved Sample Complexity.
Lan, Guanghui (Apr. 2022). Policy mirror descent for reinforcement learning: linear convergence, new sampling complexity, and generalized problem classes. Mathematical Programming.
Yuan, Rui, Simon Shaolei Du, Robert M. Gower, Alessandro Lazaric, and Lin Xiao (2023). Linear Convergence of Natural Policy Gradient Methods with Log-Linear Policies. In International Conference on Learning Representations.
Alfano, Carlo, Rui Yuan, and Patrick Rebeschini (2023). A Novel Framework for Policy Mirror Descent with General Parametrization and Linear Convergence.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate that the reviewer understood and recognized the contribution of the work. We answer the questions as follows.
>**figures ... y-axis**
We mentioned that $\pi_{\theta}^\top r$ is used as the vertical axis value in Line 148, and will add this to the figure.
>**Line 52-53 ... approximation error ... Alfano et al. (2023)**
Thank you for pointing out the work on PG using overparameterized NNs. We will cite it and mention that zero function approximation error results are useful in this setting, as suggested.
>**Line 361-362**
Thank you for the suggestions. We present our current understanding and evidence below. Please also check the common feedback for details.
>**Clarification question ... open problem**
Thank you for checking Agarwal et al. (2021) very carefully. That paper does not explicitly state that **whether Softmax PG achieves global convergence under zero approximation error** is an open problem. However, to the best of our knowledge, and after extensive communication with the authors of Agarwal et al. (2021), the common understanding is that this problem has remained open in the sense that it had not been solved before. We will clarify this in revised versions.
>**optimal action is not unique**
Yes, $\pi_{\theta_t}^\top r \to r(a^*)$ will hold. Consider two optimal actions $a_1^*$ and $a_2^*$ with $r(a_1^*) = r(a_2^*)$. If the plateaus for those two actions are separated on the landscape in Figure 1 (this is an intuitive statement), it will follow that either $\pi_{\theta_t}(a_1^*) \to 1$ or $\pi_{\theta_t}(a_2^*) \to 1$, depending on where $\theta_1$ is initialized (the arguments are almost identical to the unique optimal action case). Otherwise, if the plateaus for those two actions are connected on the landscape (neighborhoods of one another), then we will have $\pi_{\theta_t}(a_1^*) + \pi_{\theta_t}(a_2^*) \to 1$, since the current analysis only asserts that $\theta_t$ approaches the "joint" optimal plateau formed by $a_1^*$ and $a_2^*$.
>**entropy regularized bandit case**
Thank you for asking this question. Our speculation is that the answer is yes. In particular, we believe that when the temperature is small enough (entropy does not carry much weight compared to the reward), the landscape is modified so that the stationary point is "moved" from a one-hot policy to a finite stationary point, hence the nice "ordered" landscape is still preserved when adding entropy. Further study is required to rigorously prove or disprove this speculation.
>**NPG with non-adaptive geometrically increasing step size still hold**
Thank you for pointing this out. We have checked and do not see any difficulty that prevents this generalization.
>**As for the general MDP ... advantage function / Q-function ... challenging obstacle**
Thank you for bringing up this idea, which seems to be very reasonable. As mentioned in the common feedback, considering the advantage function / Q-function is also the idea we were looking at. Difficulties are also explained in the common feedback. We believe additional efforts are needed to make this idea work in MDPs.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their thorough and clear response, especially for the discussion of generalization to MDPs. I encourage the authors to present discussions of other cases in the revised version, such as the case where the optimal action is not unique, mentioned by Reviewer WYfH and myself, the case of entropy regularized bandits, and the linear feasibility problem mentioned by Reviewer hbxK. Overall, I am inclined to maintain my score. I will also pay attention to Reviewer WYfH's upcoming comments to see if their concerns on the proofs are cleared up. | Summary: The paper studies softmax policy gradient and natural policy gradient methods for multi-arm bandits problems using linear function approximation. The authors provide examples to illustrate the global convergence of these methods when the standard function approximation error is not zero. To better characterize the global convergence, the authors provide the ordering-based conditions on rewards. Some numerical examples are provided to verify the proposed conditions.
Strengths: **originality**
- The studied deterministic policy gradient methods are well known in the literature. The authors revisit the convergence of these methods for multi-arm bandits with linear function approximation. Such convergence hasn't been studied directly.
- The authors show necessary (and sufficient) conditions for two policy gradient methods to converge in the linear function approximation setting. The analysis generalizes some similar analyses, e.g., reference [18], to linear function approximation. Technical comparisons with the existing analyses are not very clear.
**quality**
- It seems that the global convergence is abused in some way, since the existing convergence studies in the linear function approximation investigate different sub-optimality gaps, e.g., [4] is based on the regret analysis while [24] utilizes the mirror descent analysis.
- It is not clear how to interpret 'Approximation error is not a key quantity for characterizing global convergence in either algorithm'. The reference [24] shows that zero approximation error leads to global convergence. Although this result does not show necessity, approximation error is still an important quantity we use in practice.
- Compared with approximation error, it is less clear how to check the optimal action preservation condition, especially when the action space is large. So, the weaknesses haven't been clearly stated.
**clarity**
- The paper is structured well, but it lacks technical comparisons with existing results.
- The visualization is not clearly stated, e.g., axes, error bars, speed.
**significance**
- Since the function approximation is widely used in reinforcement learning, this work is important. It provides new understandings of policy gradient methods in bandit cases.
Weaknesses: - Technical comparisons with existing results are not detailed, sometimes vague. For instance, in line 96, why can (4) and (5) for bandits be used as RL methods; in line 102, which work provides the $1/\sqrt{t}$ rate; line 121, why 'insufficiency'; line 152, how to check global convergence; line 191, on what aspects of the algorithms must the quantity depend? Please state claims with concrete justifications.
- Optimal action preservation generalizes the analysis in reference [18]. It is similar to the gap between the optimal action and the second optimal action used in the literature, e.g., reference [12] and the paper: Regret Analysis of a Markov Policy Gradient Algorithm for Multiarm Bandits. These quantities are known to be important for global convergence, and the authors generalize them to linear function approximation. Therefore, it is important to clarify the connections and position the work in the literature properly.
- The usefulness of proposed ordering-based conditions is still questionable. First, the generalization to MDPs is not provided. Second, it is not clear how to check such conditions when state/action spaces are large, even infinite.
- The paper focuses on multi-arm bandits and deterministic policy gradient methods, which have a large gap with reinforcement learning. This work seems to be still in progress and significant effort is needed to generalize this work for serious publication.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please see questions in Strengths and Weaknesses.
Here are some other questions.
- Why is it fair to compare function approximation error with ordering-based conditions or optimal action gap conditions? The function approximation error is a general quantity that does not depend on MDP structures, while ordering-based conditions depend on the MDPs. In a basic tabular case, ordering-based conditions do not necessarily hold, e.g., equal rewards at any action, but the softmax function approximation error is zero. Hence, such conditions do not explain the basic case.
- The convergence rate seems to be vague in the main paper, which has a dependence on the gap between optimal action and second optimal action, which is known in the tabular case [12, 18]. How much does your rate analysis go beyond the existing analysis? any rate improvement?
- For softmax PG, reference [15] shows that it can take exponential time to converge in a hard MDP instance. How does ordering-based condition rule out the hard MDP instance?
- The setup is limited to deterministic algorithms with exact gradients, which is an ideal setting. Can the authors apply them to practical stochastic bandit problems?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The review seems to focus on issues arising from misunderstanding or miscommunication. We hope the following can help clarify matters.
>**global convergence is abused ...**
**First**, global convergence simply means $\pi_{\theta_t}^\top r \to r(a^*)$, i.e., policy's reward approaching that of the optimal policy, as mentioned in Lines 230 and 316. It is hard to imagine a more straightforward definition. **Second**, alternative analyses do not change the meaning of global convergence. Sub-linear regret implies that the averaged performance approaches $r(a^*)$. Our global convergence results are for last-iterate, which implies sub-linear regret.
>**... not clear how to interpret Approximation error ...**
We have explained this clearly in Section 3. **First**, Softmax PG and NPG can still achieve global convergence on Example 1 with non-zero approximation errors. This does not contradict "zero approximation error leads to global convergence". **Second**, Examples 1 and 2 have similar non-zero approximation errors but different algorithm behaviors, which makes it impossible to use non-zero approximation error to discriminate the two examples (global convergence or not). **Finally**, we agree that "approximation error is still an important quantity", but our observations also clearly show that non-zero approximation errors are not enough to characterize global convergence.
>**... less clear how to check optimal action preservation condition**
Checking the condition for NPG is no harder than checking the approximation error. To determine the approximation error $\\| \hat{r} - r \\|_2 = \min\_{w \in \mathbb{R}^d} \\| X w - r \\|_2 $, one needs to calculate the projection $\hat{r} := X (X^\top X)^{-1} X^\top r$. Optimal action preservation, $\text{argmax}\_{a \in [K]}{\hat{r}(a)} = \text{argmax}\_{a \in [K]}{r(a)}$, can be immediately verified from the same calculation. Checking the condition for PG requires a linear feasibility check (i.e., a special case of an LP), as discussed in the response to Reviewer hbxK. Note that checking the conditions for both PG and NPG requires only the same problem information ($X$, $r$, and $\hat{r}$) as checking the approximation error.
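To illustrate this point, here is a minimal NumPy sketch of the check described above (the function name, interface, and example numbers are ours, not the paper's): a single least-squares solve yields both the approximation error and the optimal-action preservation verdict.

```python
import numpy as np

def npg_condition_holds(X, r):
    """Check optimal-action preservation for NPG via the same least-squares
    computation that yields the approximation error. Sketch under the
    rebuttal's setup; assumes X has full column rank."""
    # w* = argmin_w ||X w - r||_2, so r_hat = X (X^T X)^{-1} X^T r.
    w_star, *_ = np.linalg.lstsq(X, r, rcond=None)
    r_hat = X @ w_star
    approx_err = np.linalg.norm(r_hat - r)        # the conventional quantity
    preserved = np.argmax(r_hat) == np.argmax(r)  # optimal-action preservation
    return preserved, approx_err

# Illustrative numbers (ours): K = 4 actions, d = 2 features.
X = np.array([[-0.2, -1.9], [-1.1, 0.2], [0.1, 0.8], [2.1, 0.1]])
r = np.array([9.2, 7.8, 7.1, 6.3])
ok, err = npg_condition_holds(X, r)
assert ok and err > 0  # non-zero approximation error, yet the condition holds
```

The example shows the separation discussed above: the approximation error is far from zero, while the NPG condition is still satisfied.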
>**in line 96, (4) and (5) ...**
It is well known that general Softmax PG and NPG methods are foundations of the standard RL methods mentioned. Eqs. (4) and (5) are their updates applied to the one-state setting.
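For concreteness, the one-state softmax PG update can be sketched as follows (a hypothetical NumPy re-implementation obtained by specializing the MDP expression in the common feedback to a single state, with made-up example numbers; it is not the authors' code):

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def softmax_pg_step(theta, X, r, eta):
    """One exact softmax-PG update in the one-state (bandit) setting:
    theta <- theta + eta * X^T (diag(pi) - pi pi^T) r with pi = softmax(X theta)."""
    pi = softmax(X @ theta)
    grad = X.T @ (np.diag(pi) - np.outer(pi, pi)) @ r
    return theta + eta * grad

# Tiny run on illustrative numbers (ours): the objective pi^T r improves.
X = np.array([[-0.2, -1.9], [-1.1, 0.2], [0.1, 0.8], [2.1, 0.1]])
r = np.array([9.2, 7.8, 7.1, 6.3])
theta = np.zeros(2)
start = softmax(X @ theta) @ r
for _ in range(5000):
    theta = softmax_pg_step(theta, X, r, eta=1e-3)
assert softmax(X @ theta) @ r > start  # monotone improvement for small eta
```

With a sufficiently small step size, the iterates exhibit the monotone improvement property used in the analysis (Eq. (10)).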
>**in line 102, $1/\sqrt{t}$ ...**
Sorry for the typo. [4, Table 1] contains the correct result of an $O(1/t)$ rate.
>**line 121, insufficiency...**
As Section 3 shows, small approximation error is neither necessary nor sufficient for the global convergence of PG or NPG.
Zero approximation error (i.e., linear realizability) is not necessary in either case, although it is sufficient for NPG, and proved sufficient for softmax PG for the first time in this paper. We could consider replacing "insufficiency of" with "limitations of".
>**line 152, check global convergence...**
As shown in Figure 3(c), we use sub-optimality gap $(\pi^* - \pi_{\theta_t})^\top r$ to ascertain global convergence.
>**line 191, algorithm dependent ...**
Section 3.3 already demonstrated that the condition must depend on the specific update considered (e.g., Softmax PG vs. NPG). Therefore, one has to study the conditions for Softmax PG and NPG **separately** (rather than one condition for both algorithms).
>**... generalizes the analysis of references [18,12] ...**
Our new results for function approximation are not covered by any of those papers. The gap we consider is for $\hat{r}$, which is different from the reward gap of $r$ in those papers.
>**The usefulness ... large, even infinite.**
**First**, the common author response provides a detailed discussion of the MDP case, illustrating how the ideas provide useful initial insights for MDPs. **Second**, for large action spaces, checking the new conditions is no harder than checking the approximation error, as explained above. **Third**, for infinite action spaces, it is also infeasible to exactly determine the approximation error in general.
>**... serious publication.**
Understanding the one-state and deterministic settings is a necessary first step before understanding the more involved MDP and stochastic settings. The significance of our findings seems to be well recognized by the other reviewers.
>**Why is it fair to compare function approximation error ...**
To our knowledge, approximation error (and its variants) has been the only quantity considered for characterizing function approximation quality in PG analysis. It is therefore necessary to discuss approximation error when studying the convergence of PGs with function approximation.
>**approximation error ... does not depend on MDP ...**
The approximation error $\min\_{w \in \mathbb{R}^d}{ \\| X w - r \\|_2}$ clearly depends on the problem quantity $r$, as do the order-based conditions.
>**... do not explain the basic case.**
Consider $r = \mathbf{1} \in \mathbb{R}^K$ as mentioned. Consider $w = \mathbf{0} \in \mathbb{R}^d$, such that $r^\prime = X w = \mathbf{0}$. Note that $r^\prime$ preserves the order of $r$ by definition (for all $i, j \in [K]$, $r(i) > r(j)$ if and only if $r^\prime(i) > r^\prime(j)$ as in Line 229).
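This vacuous-preservation argument can be checked mechanically (our own illustrative sketch; the feature matrix is arbitrary):

```python
import numpy as np

K, d = 4, 2
r = np.ones(K)                 # equal rewards, as in the reviewer's basic case
X = np.array([[-0.2, -1.9], [-1.1, 0.2], [0.1, 0.8], [2.1, 0.1]])
r_prime = X @ np.zeros(d)      # w = 0 gives r' = X w = 0

# Order preservation as in Line 229: r(i) > r(j) iff r'(i) > r'(j).
assert all((r[i] > r[j]) == (r_prime[i] > r_prime[j])
           for i in range(K) for j in range(K))  # vacuously true: no strict pairs
```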
>**... any rate improvement?**
Our results address the function approximation case, which is well beyond tabular analyses [12,18]. Improving rates for the tabular setting is not a focus of this work. It is certainly possible to consider increasing stepsizes [23] for improvement, which is beyond the scope of this paper.
>**... hard MDP instance?**
It is not possible to rule out hard MDP instances in general. The tabular case is recovered as a special case (Line 232). If hard MDP instances were avoided, they would be avoided in the tabular case, which would contradict [15].
>**... stochastic bandit problems?**
This is mentioned in the conclusion as future work, and we are working to obtain results in this direction.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I have improved my score and recommend the authors to take these points in revision.
1. Sub-optimality gaps in different PG methods under linear function approximation are different. The metric used in natural policy gradient (NPG) [4, Table 2] is not the same as $\pi_{\theta_t}^\top r\to r(a^\star)$.
2. The gap between the optimal action and the second optimal action has been used in the literature to show the global convergence of PG methods. The authors should discuss and connect this work with them more explicitly.
3. Bandits with function approximation have a large gap with MDPs with function approximation.
4. The ordering-based conditions depend on the order while function approximation error does not. They use different information on MDP. | Rebuttal 1:
Rebuttal: We thank the reviewers for their careful reading, valuable comments, and recognition of the contributions. This first, common feedback answers a question raised by multiple reviewers.
>**Generalization to MDPs (Reviewers m2vF, P4aQ, BpdK)**
Extending the results of this work to MDPs is an important and challenging next step, as mentioned in the conclusion, and our work provides a first step in this new direction. Here we discuss some research plans, using Softmax PG for illustration. The discussion provides some new ideas, but resolving this problem is highly non-trivial and requires further investigation.
According to the policy gradient theorem [21, Theorem 1], we have, for all $\theta_t \in \mathbb{R}^d$,
\begin{equation*}
\theta_{t+1} = \theta_t + \eta \cdot \sum_{s \in \mathcal{S}} d^{\pi_{\theta_t}}(s) \sum_{a \in \mathcal{A}} \frac{\partial \pi_{\theta_t}(a | s)}{\partial \theta_t} Q^{\pi_{\theta_t}}(s, a)
\end{equation*}
\begin{equation*}
= \theta_t + \eta \cdot \sum_{s \in \mathcal{S}} d^{\pi_{\theta_t}}(s) \cdot X_s^\top ( \text{diag}{(\pi_{\theta_t}(\cdot | s))} - \pi_{\theta_t}(\cdot | s)\pi_{\theta_t}(\cdot | s)^\top ) \ Q^{\pi_{\theta_t}}(s, \cdot),
\end{equation*}
where $X_s \in \mathbb{R}^{|\mathcal{A}| \times d}$ is the feature matrix under state $s \in \mathcal{S}$ and can be shared across multiple states. Comparing with Eq. (4), for all $s \in \mathcal{S}$, the reward vector $r \in \mathbb{R}^K$ is replaced by $Q^{\pi_{\theta_t}}(s, \cdot) \in \mathbb{R}^{|\mathcal{A}|}$, as mentioned also by Reviewer P4aQ. This fact provides some new ideas as well as difficulties.
**First**, if for every state $s \in \mathcal{S}$, the feature matrix can preserve the order of $Q^{\pi_{\theta_t}}(s, \cdot)$ for **all policies**, i.e., for all $t \ge 1$, there exists $w_t \in \mathbb{R}^d$ such that for all $s \in \mathcal{S}$, $X_s w_t \in \mathbb{R}^{|\mathcal{A}|}$ preserves the order of $Q^{\pi_{\theta_t}}(s, \cdot)$, then we have,
\begin{equation*}
\theta_{t+1}^\top w_t = \theta_t^\top w_t + \eta \cdot \sum_{s \in \mathcal{S}} d^{\pi_{\theta_t}}(s) \cdot w_t^\top X_s^\top ( \text{diag}{(\pi_{\theta_t}(\cdot | s))} - \pi_{\theta_t}(\cdot | s)\pi_{\theta_t}(\cdot | s)^\top ) \ Q^{\pi_{\theta_t}}(s, \cdot)
\end{equation*}
\begin{equation*}
\ge \theta_t^\top w_t,
\end{equation*}
generalizing Eq. (12), a key argument in the one-state setting. However, $w_t$ changes over time, since in the update $r \in \mathbb{R}^K$ is replaced by $Q^{\pi_{\theta_t}}(s, \cdot)$, which changes with $\theta_t$. Compared with the fixed $r$ and $w$ in Eq. (12), using the above inequality with $\theta_t$ and $w_t$ both changing over time is more challenging and constitutes the major technical difficulty.
**Second**, another speculation is that preserving the order of $Q^*(s, \cdot)$ (the action values of the optimal policy $\pi^*$) might be enough to achieve global convergence (if true, there would be no need to preserve the order for all policies). Here we show local convergence when $\text{softmax}(X_s \theta_t)$ is close enough to $\pi^*( \cdot | s)$. Suppose that there exists $w^* \in \mathbb{R}^d$, such that for all $s \in \mathcal{S}$, $X_s w^* \in \mathbb{R}^{|\mathcal{A}|}$ preserves the order of $Q^*(s, \cdot)$. Then for any $\theta_t$ such that $Q^{\pi_{\theta_t}}(s, \cdot)$ preserves the order of $Q^*(s, \cdot)$, we have,
\begin{equation*}
\theta_{t+1}^\top w^* = \theta_t^\top w^* + \eta \cdot \sum_{s \in \mathcal{S}} d^{\pi_{\theta_t}}(s) \cdot {w^*}^\top X_s^\top ( \text{diag}{(\pi_{\theta_t}(\cdot | s))} - \pi_{\theta_t}(\cdot | s)\pi_{\theta_t}(\cdot | s)^\top ) \ Q^{\pi_{\theta_t}}(s, \cdot)
\end{equation*}
\begin{equation*}
\ge \theta_t^\top w^*,
\end{equation*}
based on which we can show that $\theta_t$ eventually approaches the direction of $w^*$, implying that $\pi_{\theta_t}(a^* | s) = \text{softmax}(X_s \theta_t)(a^*) \to \pi^*(a^* | s) = 1$ (in combination with Lemma 1). This means that preserving the order of $Q^*(s, \cdot)$ is enough for $\pi^*$ to be a local attractor of gradient updates within its neighbourhood. One challenge here is to generalize the argument to an arbitrary initialization $\theta_1 \in \mathbb{R}^d$, rather than a $\theta_t$ close enough to the optimal solution; the difficulty is that $Q^{\pi_{\theta_t}}(s, \cdot)$ does not necessarily preserve the order of $Q^*(s, \cdot)$, so the last inequality above does not necessarily hold.
**In summary**, this discussion illustrates how the paper provides some new and useful insights for understanding more complex settings, but further investigation is required to resolve this highly non-trivial problem for general MDPs. We will use an additional page in subsequent versions to present this discussion, as Reviewer P4aQ suggested. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper considered the problem of global convergence conditions for policy gradient (PG) methods with linear function approximation, motivated by three observations: i) global convergence under linear function approximation can be achieved without policy or reward realizability; ii) approximation error is not a critical factor for global convergence; and iii) conditions characterizing global convergence should be algorithm-dependent. Based on these observations, the authors developed new ordering-based conditions for global convergence of PG methods: i) for Softmax PG, a sufficient condition for global convergence is that the representation preserves the ranking of the rewards; and ii) for natural PG (NPG), the necessary and sufficient condition for global convergence is that the projection of the reward onto the representation space preserves the optimal action's rank.
Strengths: 1. This paper develops a new set of global convergence conditions for PG methods with linear function approximation, which advances the state of the art of understanding PG methods (Softmax PG in particular).
2. The proof strategies and algorithm analysis techniques are novel.
3. Motivating examples in this paper are insightful.
Weaknesses: 1. The paper could benefit from larger-scale experiments.
2. This paper could have some further discussions on the implications of the ordering-based conditions.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: I appreciate the new findings and fresh insights of the ordering-based global convergence conditions for PG methods with linear function approximation. One immediate question that comes to my mind after reading this paper is how restrictive (or non-restrictive) these ordering-based conditions are. For example, for the Softmax PG method, it appears to me that given $r$ and $X$, checking the existence of a $w$ that preserves the reward ranking may not be a simple task. Is there a systematic way to check the existence of such a $w$? Is there any other condition equivalent to order-preserving but easier to identify and check? Could the authors discuss how large the subspace of $X$ in $\mathbb{R}^{d}$ that satisfies the order-preserving condition is? The same questions can also be asked for NPG.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate that the reviewer understands and recognizes the contributions of this work. We address the main concerns as follows.
>**Is there a systematic way to check the existence of such a $w$ given $r$ and $X$:**
Yes, checking the existence of $w$ is known as **linear feasibility** in the literature (Grötschel et al., 2012), i.e., determining whether a set of inequalities has a non-empty intersection. In particular, suppose $X \in \mathbb{R}^{K \times d}$ and $r \in \mathbb{R}^K$ are given and that $r$ is sorted, i.e., $r(1) \ge r(2) \ge \cdots \ge r(K)$. Denote $x_i \in \mathbb{R}^d$ as the $i$th row vector of $X$. The linear feasibility problem in this case is to check if there exists a $w \in \mathbb{R}^d$, such that, for all $i \in [K-1]$,
\begin{align}
x_i^\top w \ge x_{i+1}^\top w.
\end{align}
Linear feasibility can be cast as linear programming using a dummy objective and keeping the constraints, hence any LP technique, such as the ellipsoid method, can be used to solve it (Grötschel et al., 2012).
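As a concrete sketch of this reduction (ours, not from the rebuttal), the check can be phrased with SciPy's `linprog` by maximizing a margin variable $t$ subject to the ordering constraints — strictness matters here, since $w = 0$ trivially satisfies the weak inequalities:

```python
import numpy as np
from scipy.optimize import linprog

def order_preserving_w(X, r):
    """Search for w with x_i^T w strictly decreasing along the reward-sorted
    rows of X. Cast as an LP: maximize a margin t subject to D w >= t and
    |w_j| <= 1; a positive optimal margin certifies strict feasibility."""
    Xs = X[np.argsort(-r)]                    # sort actions by reward, descending
    D = Xs[:-1] - Xs[1:]                      # rows x_i - x_{i+1}
    m, d = D.shape
    c = np.zeros(d + 1)
    c[-1] = -1.0                              # variables z = (w, t); minimize -t
    A_ub = np.hstack([-D, np.ones((m, 1))])   # -D w + t <= 0  <=>  D w >= t
    bounds = [(-1.0, 1.0)] * d + [(None, 1.0)]
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(m), bounds=bounds, method="highs")
    if res.status == 0 and -res.fun > 1e-9:   # strictly positive margin found
        return res.x[:d]
    return None

X = np.array([[2.0, 0.0], [1.0, 1.0], [0.0, 0.5]])  # K=3 actions, d=2
r = np.array([3.0, 2.0, 1.0])
w = order_preserving_w(X, r)
assert w is not None and np.all(np.diff(X @ w) < 0)  # ranking preserved
```

When two actions share identical features but have different rewards, no strict margin exists and the function returns `None`.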
>**How large the subspace of $X$ that satisfies the order-preserving condition is? The question could also be asked for NPG.**
That is an interesting question. **First**, this work shows that the space is strictly larger than the set of $X$s that satisfy linear realizability / zero approximation error (i.e., the set of $X$ such that there exists $w \in \mathbb{R}^d$ to satisfy $X w = r$). From Line 232 in the paper, we know that zero approximation error implies order preservation, but **not vice versa** (Examples 1 and 3). **However**, determining how much larger the space is would require choosing a metric, such that we can compare space sizes. This needs further investigation.
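A minimal numerical illustration of this gap (our own toy example, not one of the paper's): a one-dimensional representation that cannot realize the rewards yet preserves their order exactly:

```python
import numpy as np

X = np.array([[3.0], [2.0], [1.0]])   # K=3 actions, d=1 feature
r = np.array([5.0, 1.0, 0.0])

# Linear realizability fails: the least-squares residual is nonzero ...
w_ls, res, *_ = np.linalg.lstsq(X, r, rcond=None)
assert res[0] > 1e-6                  # no w achieves X w = r

# ... yet w = 1 preserves the reward ordering exactly
w = np.array([1.0])
assert np.array_equal(np.argsort(-(X @ w)), np.argsort(-r))
```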
[1] Martin Grötschel, László Lovász, and Alexander Schrijver. Geometric algorithms and combinatorial
optimization, volume 2. Springer Science & Business Media, 2012. | null | null | null | null | null | null |
Weakly-Supervised Audio-Visual Segmentation | Accept (poster) | Summary: This paper presents a new framework for weakly supervised audio-visual segmentation which does not need pixel-level annotations. This is achieved via the proposed multi-scale multiple-instance contrastive learning approach, which can capture audio-visual alignment at multiple scales. Comparison with existing methods shows that the proposed methodology leads to state-of-the-art results.
Strengths: The paper proposes an interesting approach to audio-visual segmentation which does not rely on pixel-level annotations. This is an important contribution since obtaining such annotations is time consuming.
The paper is easy to follow.
Convincing results and a detailed ablation study are presented.
Weaknesses: The main weakness of the paper is that several important details are not described.
- In table 1, results for multiple sources are presented without providing details, e.g., how many sources are used, how were these examples obtained?
- It is not clear how the multi-scale features are generated.
- How are the negative examples selected in multi-scale multiple instance contrastive learning? Would be very helpful if the authors elaborate on this.
- Equal weights are used for the two loss terms in Eq. 8. Have the authors considered using different weights? This might improve the performance.
- It is not clear why the face and body are predicted as the sound-producing regions when a person speaks. Ideally, just the face or just the mouth should be identified as the sound source.
- The audio which corresponds to the images in Fig.2 is not presented. Most of the examples are included in the supplementary material so adding such a comment in the caption would be helpful.
- Finally, it would be helpful to add additional failure cases. Only one example is included in the supplementary material.
Another weakness is that only one dataset is used for evaluation.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the questions above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: As mentioned above, it would be desirable if the authors include additional failure cases in the supplementary material. At the moment there is only one.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer XL4i,
Thank you for appreciating our approach. We address your comments below.
> Details on multiple sources.
We use two or three sound sources from 2,120 frames for evaluation. For multi-source scenarios, we use the background activation maps as pseudo-labels, together with the category of each source, to further train a salient object detector [67]. We then generate pseudo-masks separately for each sound source according to Equation (6), replacing the salient regions with class-specific salient regions $S^\prime\in\mathbb{R}^{M\times H\times W}$. Finally, we apply the loss defined in Equation (7) to each source. We will add this clarification to the revision.
> Clarification on multi-scale features.
The multi-scale features with sizes of {56x56, 28x28, 14x14, 7x7} are generated from four stages ([3,4,6,3]) of the ResNet50-based image encoder.
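For concreteness, the stride arithmetic behind those sizes (assuming the standard 224×224 input resolution) is:

```python
# Spatial sizes of the four ResNet50 stage outputs for a 224x224 input:
# the cumulative stride doubles at each stage, starting at 4.
input_size = 224
strides = [4, 8, 16, 32]            # cumulative stride at stages 1-4
sizes = [input_size // s for s in strides]
assert sizes == [56, 28, 14, 7]     # matches the {56x56, ..., 7x7} maps above
```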
> Negative examples.
Following the spirit of contrastive learning methods, negative examples in multi-scale multiple instance contrastive learning are obtained from other images in the same mini-batch.
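A simplified NumPy sketch of this in-batch scheme with max-pooled cosine similarity (our own construction, not the paper's exact multi-scale loss; all names and the temperature are illustrative):

```python
import numpy as np

def l2norm(x, axis=-1):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)

def mil_nce_loss(vis, aud, tau=0.07):
    """vis: (B, P, D) visual features at P spatial positions per image;
    aud: (B, D) audio features. Positives are matched (audio, image) pairs;
    negatives are the other B-1 images in the same mini-batch."""
    vis, aud = l2norm(vis), l2norm(aud)
    # max-pooled cosine similarity between audio b and every image c
    sim = np.einsum('bd,cpd->bcp', aud, vis).max(axis=-1) / tau  # (B, B)
    # InfoNCE: diagonal entries are positives, off-diagonals are negatives
    logZ = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logZ - np.diag(sim)))

rng = np.random.default_rng(0)
B, P, D = 4, 49, 16
vis = rng.normal(size=(B, P, D))
aud = vis[:, 0, :] + 0.1 * rng.normal(size=(B, D))  # aligned pairs
loss = mil_nce_loss(vis, aud)
assert loss < np.log(B)   # better than chance (uniform similarity gives log B)
```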
> Different weights.
Yes. We have explored different weights but did not see significant improvements (0.1%) in the performance.
> Clarification on the prediction of face and body as the sound producing regions.
In this new weakly-supervised AVS setting, it is challenging to localize a fine-grained sound source as small as the mouth, and the video frames may contain many objects with diverse semantics. Note that even in the ground truth, the face and body are annotated as the sounding source when a person speaks or sings.
> Adding an audio comment.
Thanks for the suggestion. We will add the comment to the caption in the updated version.
> More failure cases.
The model may fail when two sources have very similar visual semantics, e.g., a mixture of people cheering and people crowd. We will add more failure cases to the supplementary.
> Evaluation of more datasets.
This is a great suggestion! We extended our method to experiments on Flickr-SoundNet and VGG Sound Source, and reported the comparison results of CIoU with existing approaches in the Table below. Compared to previous methods, our WS-AVS achieves the best results in both benchmarks.
| Method | Flickr-SoundNet | VGG Sound Source |
| ---- | :----: | :----: |
| DSOL [a] | 74.00 | 29.91 |
| EZVSL [b] | 83.94 | 38.85 |
| SLAVC [c] | 86.40 | 39.80 |
| WS-AVS (ours) | **91.63** | **42.72** |
To verify the robustness of our proposed approach to off-screen/silent objects, we evaluated our WS-AVS on Extended Flickr-SoundNet and Extended VGG-SS proposed in [c]. The comparison results of max-F1 with previous methods are reported in the Table below. Compared to the state-of-the-art frameworks, our WS-AVS achieves the best performance in both datasets.
| Method | Extended Flickr-SoundNet | Extended VGG Sound Source |
| ---- | :----: | :----: |
| DSOL [a] | 49.40 | 25.60 |
| EZVSL [b] | 54.60 | 30.90 |
| SLAVC [c] | 60.10 | 41.50 |
| WS-AVS (ours) | **65.27** | **52.38** |
**References**
[a] Hu, et al. Discriminative Sounding Objects Localization via Self-supervised Audiovisual Matching. In NeurIPS, 2020.
[b] Mo, et al. Localizing Visual Sounds the Easy Way. In ECCV, 2022.
[c] Mo, et al. A Closer Look at Weakly-Supervised Audio-Visual Source Localization. In NeurIPS, 2022.
---
Rebuttal Comment 1.1:
Comment: The authors have addressed my concerns. Please make sure that you include the additional results and explanations in the paper.
---
Reply to Comment 1.1.1:
Title: Response to the reviewer
Comment: Dear Reviewer XL4i,
Thank you for your response and we appreciate your support. We will incorporate the additional results and explanations you mentioned into the revised version of the paper. We believe that these additions will further strengthen the paper and provide a more comprehensive evaluation of our method.
Thank you once again for your valuable feedback and for your continued support throughout the review process. | Summary: This work introduces a new setting for audio-visual segmentation, Weakly-Supervised Audio-Visual Segmentation (WS-AVS). The authors address the challenge of the costly and not always available pixel-level masks by employing weakly-supervised audio-visual segmentation. This framework uses multi-scale multiple-instance contrastive learning for capturing multi-scale audio-visual alignment. Tested on the AVS Bench, WS-AVS demonstrates superior performance in both single-source and multi-source scenarios when compared to previous methods.
Strengths: 1. To eliminate the requirement of pixel-level annotations, this work proposes to solve the audio-visual segmentation problem in a weakly-supervised way. Correspondingly, it adopts Audio-Visual Fusion with Multi-scale Multiple-Instance Contrastive Learning and Pseudo Mask Refinement by Contrastive Class-agnostic Maps to solve the challenges in the weakly supervised setting.
2. The authors provide comprehensive evaluation under various settings to illustrate the performance of the proposed method.
Weaknesses: 1. Clarifying Novelty
The paper's novelty looks somewhat unclear. This reviewer recommends clearly defining the unique technical contributions proposed by this work, especially those set apart from the developments in weakly-supervised segmentation. For instance, it would be advantageous to underscore the importance of Multi-scale Multiple-Instance Contrastive Learning in the given task and its distinctiveness from similar concepts previously employed in weakly-supervised segmentation techniques.
2. Further analysis
The authors offer sufficient ablation studies to ascertain the performance improvement attributed to AVF and PMR. However, the reviewer anticipates a more in-depth analysis of the outcomes. Specifically, what distinguishes the audio-visual relationship as learned via supervised methods from those learned in weakly-supervised ways? Considering the final classification outcome, it's intuitive to hypothesize that weakly-supervised methods might result in a sub-optimal or noisy audio-visual mapping. But, without human bias, these weakly-supervised maps might demonstrate superior generalization or aid in identifying human annotation-induced preconceptions. The reviewer encourages exploring these insights rather than viewing weakly-supervised audio-visual segmentation merely as a workaround for data limitations.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The reviewer believes the task of weakly-supervised audio-visual segmentation is worth exploring, and hence currently vote for borderline accept. However, the authors should adequately discuss the key novelty of this paper for the final acceptance.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer DJ6f,
Thank you for appreciating our approach. We address your comments below.
> Clarification on contributions/novelty.
Our paper includes two main technical contributions:
1) We are the first to investigate a new weakly-supervised multi-modal problem that predicts sounding object masks from both audio and image without needing pixel-level annotations.
2) We present a simple but effective framework for weakly-supervised audio-visual segmentation by combining multi-scale multiple-instance contrastive learning and pseudo mask refinement to achieve state-of-the-art results on single-source and multi-source sounding object segmentation.
Based on the new problem and effective method, we believe that our paper has great potential to become a good benchmark approach for weakly supervised audio-visual segmentation.
> Further analysis on audio-visual matching in supervised/weakly-supervised AVS.
This is a good suggestion! To explore the effect of the audio-visual matching loss introduced in supervised AVS [1] on multi-source segmentation, we added the audio-visual matching loss to both the baseline and our WS-AVS, and report the comparison results on multi-source segmentation under the weakly-supervised AVS setting in the Table below. Applying this audio-visual matching loss benefits the audio-visual fusion for weakly-supervised AVS in terms of all metrics. Moreover, the magnitude of improvement is larger than the 1% reported in supervised AVS [1], which indicates that audio-visual matching is more important in the weakly-supervised setting than in the supervised one.
| Method | AVM | mIoU (↑) | F-score (↑) |
| ---- | :----: | :----: | :----: |
| AVS | ✗ | 8.76 | 15.72 |
| AVS | ✓ | **9.85** | **17.36** |
| WS-AVS (ours) | ✗ | 30.85 | 46.87 |
| WS-AVS (ours) | ✓ | **32.03** | **49.15** |
---
Rebuttal Comment 1.1:
Comment: After reading the rebuttal and the comments from the other reviewers, I tend to keep my current score. Thanks for addressing the concern and conducting the experiments. | Summary: This paper proposes a new contrastive learning approach with pseudo-mask generation/refinement process for weakly supervised video segmentation with audio guidance. The authors show that their method outperforms several other state-of-the-art methods for the visual segmentation task when training and testing on the AVSBench dataset.
Strengths: - The paper shows strong results compared to other methods in the literature.
- The paper tries to tackle a very important problem in the vicinity of audio-visual perception which is how to find the pixel regions that correspond to sounding sources when limited annotated data are available.
- The paper is well written and the ideas are clearly presented.
Weaknesses: Unfortunately the paper also contains several weaknesses that narrow down the scope and the potential impact of the paper that could be addressed using experiments and rewriting some parts of the manuscript when needed.
- The authors only include experiments at a very small scale in multiple respects (few video frames, only 23 sound classes, fewer than 5k videos) to compare their method with other established methods. Although smaller-scale experiments help with fine-tuning hyperparameters and early experimentation, it becomes difficult to extrapolate the findings of this study to large-scale solutions and real-world data when results are shown only on the chosen dataset. Some datasets that come to mind and could be used by the authors to enrich the results and show the true potential of their method would be a more limited dataset like Flickr-SoundNet [A], a medium-sized, mostly single-source dataset like VGG Sound Source [B], or a real-world video dataset like AudioSet [C] or YFCC100M [D].
- Building upon my previous argument, using more realistic and large-scale datasets would not only dispel potential skepticism about the validity of training neural networks with only 5k examples and limited video resolution (e.g., capturing motion features is almost impossible when the average video in the dataset used in this paper has two frames), but would also show whether the proposed approach is robust when trained with videos that contain no on-screen objects, which is very difficult for related problems like on-screen sound separation [E]. I don’t want to extend the problem to cases where more than one sound appears on-screen / off-screen, where some audio-separation front-end might be needed before obtaining the audio features, but I would still expect more convincing large-scale experiments.
- In the current version of the paper, the authors only consider evaluation datasets where the sounding object appears on-screen at some point in the video, but they completely omit reporting the performance for off-screen-only videos. To that end, the authors should also evaluate the performance of their model when there is some visual activity on-screen but the sounding object/action cannot be seen on camera, and test how robust their model is. In my experience, there will be a trade-off between how accurately the model predicts masks for on-screen objects that correspond to the on-screen audio and how well it predicts zeros for on-screen objects and actions that do not make any sound (e.g. a human talking in the background while a non-talking human is displayed on-screen).
Overall, I am willing to increase my score if the most important concerns above are addressed by the authors, since I truly believe that the paper has great potential to become a good benchmark method for weakly supervised audio-guided video segmentation.
[A] Arda Senocak, Tae-Hyun Oh, Junsik Kim, Ming-Hsuan Yang, and In So Kweon. Learning to localize sound source in visual scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4358–4366, 2018.
[B] Honglie Chen, Weidi Xie, Andrea Vedaldi, and Andrew Zisserman. Vggsound: A large-scale audio-visual dataset. In International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020.
[C] Gemmeke, J.F., Ellis, D.P., Freedman, D., Jansen, A., Lawrence, W., Moore, R.C., Plakal, M. and Ritter, M., 2017, March. Audio set: An ontology and human-labeled dataset for audio events. In 2017 IEEE international conference on acoustics, speech and signal processing (ICASSP) (pp. 776-780). IEEE.
[D] Thomee, B., Shamma, D.A., Friedland, G., Elizalde, B., Ni, K., Poland, D., Borth, D. and Li, L.J., 2016. YFCC100M: The new data in multimedia research. Communications of the ACM, 59(2), pp.64-73.
[E] Tzinis E, Wisdom S, Jansen A, Hershey S, Remez T, Ellis D, Hershey JR. Into the Wild with AudioScope: Unsupervised Audio-Visual Separation of On-Screen Sounds. In International Conference on Learning Representations 2021.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Have you considered an analysis of the proposed approach for the choice of the model for ensuring robustness towards the “reliable pseudo mask extraction with multi-scale visual features”?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors discuss some limitations of their work and I think they should include the issues that I raised in the Section above and that they will not be able to address through experiments and/or rebuttal.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer yiGS,
Thank you for the detailed review. We will address your concerns below.
> Large-scale experiments.
This is a great suggestion! We extended our method to experiments on Flickr-SoundNet and VGG Sound Source, and reported the comparison results of CIoU with existing approaches in the Table below. Compared to previous methods, our WS-AVS achieves the best results in both benchmarks.
| Method | Flickr-SoundNet | VGG Sound Source |
| ---- | :----: | :----: |
| DSOL [a] | 74.00 | 29.91 |
| EZVSL [b] | 83.94 | 38.85 |
| SLAVC [c] | 86.40 | 39.80 |
| WS-AVS (ours) | **91.63** | **42.72** |
> Robustness to non-sounding and off-screen objects.
This is a good suggestion! To verify the robustness of our proposed approach to non-sounding and off-screen objects, we evaluated our WS-AVS on Extended Flickr-SoundNet and Extended VGG-SS proposed in [c]. The comparison results of max-F1 with previous methods are shown in the Table below. Compared to the state-of-the-art frameworks, our WS-AVS achieves the best performance in both datasets.
| Method | Extended Flickr-SoundNet | Extended VGG Sound Source |
| ---- | :----: | :----: |
| DSOL [a] | 49.40 | 25.60 |
| EZVSL [b] | 54.60 | 30.90 |
| SLAVC [c] | 60.10 | 41.50 |
| WS-AVS (ours) | **65.27** | **52.38** |
> Robustness to off-screen objects.
Please see the response above.
> Analysis of the choice of multi-scale visual feature maps for pseudo mask extraction.
Yes, this is a good question! We ablated the number of stages of multi-scale visual feature maps over {1, 2, 3, 4} for pseudo-label generation, and report the comparison results on multi-source segmentation in the Table below. With four stages, we achieve the best segmentation performance in terms of all metrics. Increasing the number of feature stages from 1 to 4 consistently improves the results of the proposed WS-AVS, which shows the importance of multi-scale visual features in pseudo-label generation.
| Feature Stage | mIoU (↑) | F-score (↑) |
| ---- | :----: | :----: |
| 1 | 26.78 | 31.52 |
| 2 | 27.19 | 33.56 |
| 3 | 28.65 | 37.16 |
| 4 | **30.85** | **46.87** |
**References**
[a] Hu, et al. Discriminative Sounding Objects Localization via Self-supervised Audiovisual Matching. In NeurIPS, 2020.
[b] Mo, et al. Localizing Visual Sounds the Easy Way. In ECCV, 2022.
[c] Mo, et al. A Closer Look at Weakly-Supervised Audio-Visual Source Localization. In NeurIPS, 2022.
---
Rebuttal Comment 1.1:
Title: Additional response to the reviewer
Comment: Dear Reviewer yiGS,
Thank you for your detailed review and the valuable feedback. We have carefully addressed each of your concerns and provided clarifications in our previous response. We would like to kindly request your response to the provided explanations and revisions.
We appreciate your thorough evaluation of our work, and your feedback will greatly contribute to the improvement of our manuscript.
Thank you for your continued engagement and support. | Summary: The authors propose a weakly-supervised audio-visual segmentation framework called WS-AVS that predicts sounding source masks from audio and images without pixel-level ground truth masks. It leverages multi-scale contrastive learning in audio-visual fusion to capture multi-scale alignment, addressing modality uncertainty in weakly supervised segmentation. The refined pseudo masks guide training, enabling the generation of accurate segmentation masks. Experiments show it outperforms weakly-supervised baselines.
Strengths: The manuscript presents a solution to the challenging problem of performing segmentation tasks without pixel-level annotations by effectively integrating multi-scale contrastive learning and pseudo mask refinement. It is well written and easy to read through.
Weaknesses: 1. Although these methods have been proven effective independently, their combination in this manuscript does not elevate them to a novel plane but rather presents a blend of established tactics; the novelty therefore seems insufficient.
2. While the max-pooled cosine similarity function works well in [9], is there any other similarity function that can be compared with, as AVS has multi-source scenarios?
3. As the WS-AVS framework relies heavily, to some extent, on the generation of pseudo-labels, so is there any other ablations on pseudo-label generation?
4. Noticed that there is no schematic for multi-source in the manuscript, wondering what the pseudo-label for multi-source looks like.
5. In Lines 188-190, the authors wrote: the recent audio-visual segmentation baseline [1] does not give any cross-modal constraint on the audio and visual representations in the audio-visual fusion, which causes a significant performance drop when removing the ground-truth masks during training. I notice that [1] proposed an audio-visual matching loss that is used in the multiple-sound-source case, as [1] claimed this loss does not bring further improvement in the single-source case. I guess the pixel-level ground truth provides enough supervision in single-source AVS. However, in the studied weakly-supervised AVS setting, will this loss help the audio-visual fusion?
6. Some minor issues:
Typo. Line 54, “WS-VAS, that” should be “WS-AVS, which”
Typo. Line 176, “V^s” should be “V^s_i”
Typo. Line 194 “WSSL” should be “WSSS”
The result in F-score of Multiple Source in Table 1 should be bold.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: As above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer CQaV,
Thank you for the detailed review. We will address your concerns below.
> Clarification on novel aspects.
Our paper includes two main novel aspects:
1) We are the first to investigate a new weakly-supervised multi-modal problem that predicts sounding object masks from both audio and image without needing pixel-level annotations.
2) We present a novel framework for weakly-supervised audio-visual segmentation with multi-scale multiple-instance contrastive learning to achieve state-of-the-art results on single-source and multi-source sounding object segmentation.
> Other similarity function.
Yes. We tried to replace the cosine similarity function with the Kullback–Leibler divergence in multi-source scenarios. The results are shown in the Table below. As can be seen, using the max-pooled cosine similarity function achieves much better performance in terms of both metrics.
| Similarity Function | mIoU (↑) | F-score (↑) |
| ---- | :----: | :----: |
| Kullback–Leibler divergence | 24.85 | 29.69 |
| cosine similarity | **30.85** | **46.87** |
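For concreteness, here is a minimal NumPy sketch of a max-pooled cosine similarity between an audio embedding and a visual feature map, as typically used in audio-visual contrastive objectives. This is not the paper's implementation; the shapes, names, and epsilon handling are illustrative assumptions.

```python
import numpy as np

def max_pooled_cosine_similarity(audio_emb, visual_map):
    """Max-pooled cosine similarity between an audio embedding and a
    spatial visual feature map.

    audio_emb:  (D,)       audio feature vector
    visual_map: (D, H, W)  visual feature map
    Returns the maximum cosine similarity over all H*W spatial locations.
    """
    D, H, W = visual_map.shape
    v = visual_map.reshape(D, H * W)                            # (D, HW)
    a = audio_emb / (np.linalg.norm(audio_emb) + 1e-8)          # unit audio vector
    v = v / (np.linalg.norm(v, axis=0, keepdims=True) + 1e-8)   # unit per-location
    sims = a @ v                                                # (HW,) cosine sims
    return sims.max()                                           # pool over locations
```

The max-pooling keeps only the best-matching spatial location, which is what makes the score robust to the sounding object occupying a small region of the frame.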
> Ablations on multi-scale visual feature maps for pseudo-label generation.
This is a good suggestion! We ablated the number of stages of multi-scale visual feature maps from {1, 2, 3, 4} for pseudo-label generation, and report the comparison results on multi-source segmentation in the Table below. When the number of stages is 4, we achieve the best segmentation performance in terms of all metrics. As the number of feature stages increases from 1 to 4, the proposed WS-AVS consistently improves, which shows the importance of multi-scale visual features in pseudo-label generation.
| Feature Stage | mIoU (↑) | F-score (↑) |
| ---- | :----: | :----: |
| 1 | 26.78 | 31.52 |
| 2 | 27.19 | 33.56 |
| 3 | 28.65 | 37.16 |
| 4 | **30.85** | **46.87** |
> Pseudo-label for multi-source scenarios.
For pseudo-labels in multi-source scenarios, we use the background activation maps as pseudo-labels and the categories of each source to further train a salient object detector [67]. We then generate a pseudo-mask separately for each sound source according to Equation (6) by replacing the salient regions with class-specific salient regions $S^\prime\in\mathbb{R}^{M\times H\times W}$, where $M$ denotes the number of sound sources. Finally, we apply the loss defined in Equation (7) to each source. We will add this clarification to the revision.
> Audio-visual matching loss in the weakly-supervised AVS.
This is a good suggestion! We tried to add the audio-visual matching loss to both baseline and our WS-AVS, and reported the comparison results on multi-source segmentations under weakly-supervised AVS setting in the Table below. Applying this loss benefits the audio-visual fusion for weakly-supervised AVS in terms of all metrics.
| Method | AVM | mIoU (↑) | F-score (↑) |
| ---- | :----: | :----: | :----: |
| AVS | ✗ | 8.76 | 15.72 |
| AVS | ✓ | **9.85** | **17.36** |
| WS-AVS (ours) | ✗ | 30.85 | 46.87 |
| WS-AVS (ours) | ✓ | **32.03** | **49.15** |
> Minor issues.
Thanks for spotting these. We have fixed them.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer CQaV,
Thank you for your thorough review and valuable feedback. We have carefully addressed each of your concerns and provided clarifications in our previous response. We would like to kindly request your response to the provided explanations and revisions.
We appreciate your engagement and the time you have dedicated to reviewing our work. Your feedback will greatly contribute to the improvement of our manuscript.
Thank you for your continued support.
Title: Additional response to the reviewer
---
Reply to Comment 1.1.1:
Title: Additional response
Comment: Dear Reviewer CQaV,
Thank you for your insightful review and valuable feedback. We have carefully considered your comments and suggestions and have made the necessary revisions to address them. We are pleased to inform you that we have addressed all the concerns you raised, including clarifying the novel aspects of our work, providing comparisons with alternative similarity functions, presenting ablations on multi-scale visual feature maps for pseudo-label generation, explaining the pseudo-labels for multi-source scenarios, and discussing the impact of the audio-visual matching loss in weakly-supervised AVS.
We kindly request your response to our rebuttal. We hope that our revisions have sufficiently addressed your concerns and improved the overall quality and clarity of our work. We appreciate your time and expertise in reviewing our paper, and we look forward to your feedback.
Thank you once again for your valuable input.
---
Rebuttal Comment 1.2:
Comment: Thanks for your reply.
After carefully reading the rebuttal, my concern has been addressed. However, I still find the novelty to be weak in my opinion.
I will raise my score accordingly.
---
Reply to Comment 1.2.1:
Title: Response to the reviewer
Comment: Dear Reviewer CQaV,
We sincerely appreciate your careful consideration of our rebuttal and the time you have dedicated to reviewing our paper. We are glad to hear that your concerns have been addressed through our response.
We acknowledge and respect your viewpoint regarding the perceived weakness in the novelty of our contribution. While we maintain a different perspective, we recognize that differing interpretations can lead to diverse assessments of novelty. In response to your concerns, we would like to reiterate the unique contributions and advancements our work offers in the field.
Our paper addresses **a significant problem** in the realm of audio-visual segmentation, specifically the identification of pixel regions corresponding to sound sources **when limited annotated data are available**. This contribution is of paramount importance as obtaining precise pixel-level annotations is a time-consuming endeavor.
To mitigate the reliance on pixel-level annotations, we propose **a novel approach to address the audio-visual segmentation problem in a weakly supervised manner**. Our method employs Audio-Visual Fusion with Multi-scale Multiple-Instance Contrastive Learning and Pseudo Mask Refinement by Contrastive Class-agnostic Maps to overcome the challenges posed in the weakly supervised setting.
Furthermore, we provide a comprehensive evaluation of our proposed method under **various settings (single-source, multiple-source, and off-screen/silent objects)** to demonstrate its performance. Our paper presents robust results when compared to existing methods in the literature, validating the efficacy and effectiveness of our approach.
Thank you once again for your valuable input and for adjusting your score based on your assessment. We are grateful for your expertise and contribution to the review process. We would appreciate it if you could elaborate on which aspects of our paper you find lacking in novelty, as we could not determine this from the review. | Rebuttal 1:
Rebuttal: Dear all reviewers:
We extend our heartfelt gratitude to each of you for generously dedicating your valuable time and expertise to reviewing our work. We acknowledge and deeply appreciate the insightful comments and critiques provided by all the reviewers. In response to your invaluable feedback, we have made significant revisions to our manuscript, aiming to address each of your concerns in a comprehensive and scholarly manner. Reviewer CQaV and Reviewer yiGS, we kindly request your reconsideration of your decision, given that we have taken utmost care to thoroughly address the main comments raised in your reviews.
Once again, we express our sincere appreciation for your valuable contributions to the review process. Your expertise and guidance have been invaluable in improving the quality of our work. We remain committed to continuous improvement and eagerly await your final decision. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
D-Separation for Causal Self-Explanation | Accept (poster) | Summary: In this paper, the authors focus on self-explaining rationalization and they propose a Minimum Conditional Dependence (MCD) criterion to select causal rationales from a causal perspective.
Strengths: 1. Rationalization is a worthwhile direction to explore in interpretable research.
2. The paper is well written.
Weaknesses: 1. This paper lacks novelty. First of all, causality in rationalization has been widely discussed [1][2][3]. Besides, the method proposed by the authors is very similar to previously proposed methods [4][5]. The authors do not compare against them as baselines and do not cite these papers.
2. The datasets in this paper are limited, and I would expect to see more datasets, such as those in ERASER [6], which have been widely used for rationalization [7][8][9].
Furthermore, both BeerAdvocate and HotelReview seem to focus on binary classification. Have the authors studied the effectiveness of MCD on multi-class classification tasks, as DARE [10] and InfoCAL [5] do?
3. Can the authors provide an example showing that the rationales extracted by MCD are not affected by spurious correlations in the data?
4. The recent work is insufficient where much of the recent work on the rationalization is not discussed, such as [3][4][5][6][8][10].
References
[1] Invariant rationalization.
[2] Interventional rationalization.
[3] Discovering Invariant Rationales for Graph Neural Networks.
[4] How do Decisions Emerge across Layers in Neural Models? Interpretation with Differentiable Masking
[5] Learning from the Best: Rationalizing Prediction by Adversarial Information Calibration
[6] ERASER: A benchmark to evaluate rationalized NLP models.
[7] An information bottleneck approach for controlling conciseness in rationale extraction.
[8] Unifying model explainability and robustness for joint text classification and rationale extraction.
[9] UNIREX: A Unified Learning Framework for Language Model Rationale Extraction
[10] DARE: Disentanglement-augmented rationale extraction
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Why do the authors use different baselines in BeerAdvocate and HotelReview?
2. The authors' discussion of rationalization+LLMs is insufficient; I would like to see more of the authors' views on rationalization+LLMs.
3. The authors' experiments focus on the text classification task, would MCD be equally effective on other nlp tasks, such as text generation?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: See Weaknesses for details.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your valuable comments and suggestions.
**A1 (Novelty):**
Regarding the comparison with [1,2,3], we agree that causality in rationalization is not a new topic. However, our research differs significantly from [1,2,3]. The limitations of [1,2] ([3] is similar to [1]) have been discussed in detail in Sec.1 and App.C. Methodologically, our approach is inspired by neither IRM nor backdoor adjustment, and our analyses of rationale causality are also very different from [1,2]. We also provide a very different theoretical perspective linking causality to our MCD criterion, which is model-agnostic and could enhance other methods. Furthermore, our method significantly outperforms [1] and [2] in terms of F1 score.
Compared to [4], our research targets a different problem, guided by distinct motivations and novel theoretical insights. While [4] focuses on post-hoc explanation, our work is centered on the self-explanation literature, leading to different challenges (faithfulness for post-hoc explanation (page 4 in [4]) and plausibility for self-explanation (L27-65)) and different targets. Besides, [4] does not include any analysis of causality; its focus is on developing differentiable masking. Our main contributions, the analyses of rationale causality, diverge from [4].
[5] may appear similar, as it uses a discriminator to make the guider predictor and the original predictor encode indistinguishable information; however, our goals and theoretical insights differ significantly from [5]. For example, [5] cannot guarantee that causal rationales will get a lower $l_d$ (the discrimination loss in [5]) than spurious correlations. Also, [5] does not align closely with our research problems (spurious correlations and degeneration), and its code is not accessible.
Could you please provide more details about the similarity so that we can better address your concerns? We appreciate your suggestion and will discuss [3,4,5] in Sec.2.
We respectfully highlight that our work is not only about proposing a new method, but also about linking two key problems to the flexible MCD criterion. As far as we know, degeneration has never been considered a non-causality problem, even in [1,2].
**A2 (datasets):** Our experiments focus on testing the efficacy of our MCD in selecting causal rationales, which calls for specific dataset properties. First, the main challenge should be to distinguish between association and causation rather than other problems, which requires there to be some spurious correlations in the dataset. Multi-aspect classification datasets satisfy this property well, since there are usually correlations between different aspects. Other datasets in ERASER don't satisfy this property. Second, there should be human-annotated causal rationales in the test set to measure whether the selected rationales are real causal rationales or just associated with the label. The LJP dataset used in [5,10] doesn't satisfy this property. [5,10] only evaluate the prediction performance on this dataset, which is only association, not causation.
**A3 (examples):** Thank you for your suggestion. It was an oversight on our part to leave that out. We have added some examples to Fig.1 and 2 of the **rebuttal.pdf**.
**A4 (recent references):** We appreciate your suggestion and will include them in Sec.2. Given the vastness of the rationalization field, it's hard to cover all new papers in a non-survey paper. However, we've exhaustively discussed all important papers that pertain to our key research problems, i.e., feature correlation and degeneration.
**A5 (different baselines for Beer and Hotel):** Thank you for your question. We have now reimplemented Inter_RAT on Hotel. The results are in Table 1 of the **rebuttal.pdf**. Since INVRAT doesn't provide runnable code and the details of how to create different environments are not very clear, we failed to reimplement INVRAT on other datasets. Another reason we didn't include INVRAT and Inter_RAT in Table 3 is that FR (published in late 2022) has been shown to outperform INVRAT and Inter_RAT by a large margin on BeerAdvocate (the main dataset used in INVRAT and Inter_RAT), so we thought comparing MCD to FR was somewhat sufficient.
**A6 (LLMs):** Here is a brief discussion of LLMs+XAI. With the great success of LLMs, a new research line for XAI is chain-of-thought. By generating (as opposed to selecting) intermediate reasoning steps before inferring the answer, the reasoning steps can be seen as a kind of explanation. However, LLMs sometimes exhibit unpredictable failure modes [B] or hallucinatory reasoning [C], making this kind of generative explanation not trustworthy enough in some high-stakes scenarios. Also, some recent research finds that LLMs are not good at extractive tasks [D-F].
We appreciate your suggestion, but delving further into the challenges of applying LLMs is somewhat beyond the scope of this paper. We will add it to the limitations and leave it as future work.
**A7 (would MCD be equally effective on other nlp tasks?):** It depends on the problem definition. As you know, different tasks can have different causal structures. We hope that the motivational process of MCD can inspire others to propose methods that work in other fields. However, we don't claim that our method is directly applicable to any tasks without modification.
[A] ZIN: When and How to Learn Invariance Without Environment Partition? NeurIPS 2022.
[B] Causal reasoning and large language models: Opening a new frontier for causality. arXiv:2305.
[C] Survey of hallucination in natural language generation. ACM Computing Surveys, 2023.
[D] Is chatgpt a general-purpose natural language processing task solver? arXiv:2302.
[E] Evaluating chatgpt’s information extraction capabilities: An assessment of performance, explainability, calibration, and faithfulness. arXiv:2304.
[F] A comprehensive capability analysis of gpt-3 and gpt-3.5 series models. arXiv:2303.
---
Rebuttal Comment 1.1:
Title: Further clarifications on the novelty and datasets.
Comment: **A1:** Dear reviewer, I'd like to further elaborate on the distinct novelty of our work in comparison to [4].
Before we begin, we'd like to emphasize that our primary contribution lies in the novel theoretical insights regarding rationale causality and the introduction of the MCD criterion as a replacement for MMI, rather than in a specific model (L105-109).
Below are some distinctions from the perspective of the proposed model:
While [4] focuses on post-hoc explanation, our study concentrates on self-rationalization. These are two distinct research areas. The primary challenge in post-hoc explanation is ensuring faithfulness (see L27-33 of our submission and Section 3 of [4]), which means the explanation should correspond closely with the model's prediction, $\hat{Y}$. This intuitively leads to aligning $P(\hat{Y}|X)$ with $P(\hat{Y}|X_Z)$. But in our paper, we are extracting rationales that are causal to the human-annotated **gold label $Y$**. Note that from a theoretical standpoint, even subtle differences can lead to very different results.
Very different from [4], our Equation 16 is designed to align $P({Y}|X)$ and $P({Y}|X_Z)$ (L252).
This necessitates minimizing the cross-entropies to approximate both $P({Y}|X)$ and $P({Y}|X_Z)$ via $P(\hat{Y}|X)$ and $P(\hat{Y}|X_Z)$, respectively (L253-257). The above motivations are all based on our causality analysis, which is central to our research.
[4] doesn't touch on causality analysis, and as a result there is no approximation for $P({Y}|X_Z)$ in [4].
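To make this alignment concrete, here is a minimal NumPy sketch of an objective of this kind. It is not the authors' implementation: the KL-based alignment term, the $\lambda$ weight, and all function names are illustrative assumptions.

```python
import numpy as np

def cross_entropy(y_onehot, probs):
    """H_c(Y, Yhat): mean cross-entropy between gold labels and predicted probabilities."""
    return -np.mean(np.sum(y_onehot * np.log(probs + 1e-12), axis=1))

def kl_divergence(p, q):
    """Mean KL(p || q) over a batch of categorical distributions."""
    return np.mean(np.sum(p * np.log((p + 1e-12) / (q + 1e-12)), axis=1))

def alignment_objective(y_onehot, p_full, p_rationale, lam=1.0):
    """Fit both predictors to the gold label Y with two cross-entropy terms
    (approximating P(Y|X) via P(Yhat|X) and P(Y|X_Z) via P(Yhat|X_Z)),
    and penalize divergence between the two output distributions."""
    return (cross_entropy(y_onehot, p_full)
            + cross_entropy(y_onehot, p_rationale)
            + lam * kl_divergence(p_full, p_rationale))
```

When the two predictors produce identical output distributions, the alignment term vanishes and only the two cross-entropy terms remain.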
Here are the ablation experiments that remove the approximation for $P({Y}|X_Z)$:
| Appearance | S | Acc | P | R | F1 |
|:-:|:-:|:-:|:-:|:-:|:-:|
| MCD | 9.5 | 81.5 | 94.2 | 48.4 | **63.9** |
| w/o $H_c(Y,\hat{Y}_z\|X_Z)$ | 9.7 | 72.5 | 85.1 | 44.6 | 58.5 |
| MCD | 20.0 | 85.5 | 79.3 | 85.5 | **82.3** |
| w/o $H_c(Y,\hat{Y}_z\|X_Z)$ | 20.0 | 75.5 | 74.6 | 80.7 | 77.5 |
| MCD | 29.7 | 86.7 | 59.6 | 95.6 | **73.4** |
| w/o $H_c(Y,\hat{Y}_z\|X_Z)$ | 32.4 | 81.2 | 54.6 | 95.5 | 69.5 |

| Aroma | S | Acc | P | R | F1 |
|:-:|:-:|:-:|:-:|:-:|:-:|
| MCD | 9.9 | 87.5 | 84.6 | 53.9 | **65.8** |
| w/o $H_c(Y,\hat{Y}_z\|X_Z)$ | 11.4 | 86.3 | 70.2 | 51.6 | 59.4 |
| MCD | 19.3 | 88.4 | 65.8 | 81.4 | **72.8** |
| w/o $H_c(Y,\hat{Y}_z\|X_Z)$ | 21.4 | 84.3 | 58.8 | 80.6 | 68.0 |
| MCD | 29.6 | 90.2 | 46.1 | 87.5 | **60.4** |
| w/o $H_c(Y,\hat{Y}_z\|X_Z)$ | 30.4 | 88.4 | 40.4 | 78.8 | 53.4 |

| Palate | S | Acc | P | R | F1 |
|:-:|:-:|:-:|:-:|:-:|:-:|
| MCD | 9.4 | 87.3 | 60.9 | 47.1 | **53.1** |
| w/o $H_c(Y,\hat{Y}_z\|X_Z)$ | 10.6 | 83.7 | 53.0 | 45.2 | 48.8 |
| MCD | 19.6 | 87.7 | 41.3 | 65.0 | **50.5** |
| w/o $H_c(Y,\hat{Y}_z\|X_Z)$ | 20.5 | 85.2 | 37.3 | 61.4 | 46.4 |
| MCD | 29.4 | 87.0 | 30.5 | 72.4 | **42.9** |
| w/o $H_c(Y,\hat{Y}_z\|X_Z)$ | 31.5 | 86.5 | 25.4 | 64.4 | 36.4 |
We see that when we remove the approximation, there is a significant drop in the F1-score, demonstrating the importance of our novel theoretical analyses.
We are grateful for your time and expertise in reviewing our paper. To better address your feedback, could you elaborate on the similarity concerns you raised?
**A2:** Here are some further clarifications about the ERASER datasets.
We think there is a misunderstanding. While ERASER is indeed a valuable resource, it may not always be the best benchmark for every research problem.
The primary strength of ERASER datasets is the inclusion of human-annotated rationales in the training set, making them particularly beneficial for supervised rationale extraction. To the best of our knowledge, the majority of methods that leverage ERASER datasets, including those you've mentioned ([7,8,9]), all require human-annotated rationales for training. In contrast, methods that focus on unsupervised rationale extraction, such as [1,2,5,10] that you mention, tend not to employ ERASER. Currently, the Beer dataset remains a predominant choice in this area. We opted for the Beer and Hotel datasets, aligning with the recent and strong baseline FR from NeurIPS 2022.
---
Rebuttal Comment 1.2:
Title: Response to rebuttal
Comment: Thanks for the response. But I still have the following questions:
**about A2**
I do not agree with the authors' explanation for not using the ERASER dataset. Since MCD should eventually be applied in practice, it should be validated on more datasets to verify its effectiveness. Moreover, the datasets in ERASER all contain true rationale labels and can be considered causal rationales.
**about A4**
I don't agree with the authors' statement "Given the vastness of the rationalization field, it's hard to cover all new papers in a non-survey paper". Some of the relevant work I mentioned in the review comments was done before 2022. For NeurIPS 2023, they are not new papers.
---
Reply to Comment 1.2.1:
Title: We are grateful for your feedback.
Comment: We are grateful for your feedback. Here are some further clarifications.
**A1 (further clarification on dataset selection):** We agree that the application is important. But as a research paper, the most important role of the experiments is to verify the theoretical claims, rather than to achieve the engineering SOTA. This is because different applications face different challenges, and different methods are aimed at solving different problems. We can choose different methods for different applications.
As implied in the previous rebuttal, the most appropriate dataset for verifying the ability to select causal rationales is still the BeerAdvocate, and the methods that are most relevant to our research (INVRAT, Inter_RAT, and FR) all use BeerAdvocate as their main experiment. Also, the datasets we chose are just the same as the strongest baseline FR (NeurIPS 2022).
There are two lines of research in rationalization: supervised rationalization and unsupervised rationalization, where "supervised" means that human-annotated rationales are required for training. As implied in the previous rebuttal, the primary advantage of the ERASER datasets is the inclusion of human-annotated rationales in the training set, making them particularly useful for supervised rationale extraction. To the best of our knowledge, the majority of methods that leverage the ERASER datasets, including those you've mentioned ([7,8,9]), are all supervised methods. However, our research is unsupervised rationalization, and the datasets we chose are those widely used by other unsupervised methods. Our datasets align with two recent papers, DMR (AAAI 2021) and FR (NeurIPS 2022). We also recently found that a recently published (June 25, 2023) paper, CR [A], uses BeerAdvocate as its main experiment and HotelReviews as a supplement, without using ERASER, which validates the usefulness of the BeerAdvocate and HotelReviews datasets we used.
**A2 (new references):** Thank you for your suggestion. We agree that some of the papers in the field of rationalization were not covered in the original submission, and we will discuss them in Sec.2 in the next version. What I mean by "it's hard to cover all new papers in a non-survey paper" is that we only discussed the papers that study similar research topics (i.e., feature correlation and degeneration) with us in detail, and ignored those sub-relevant papers, due to the page limit. We appreciate these valuable references and we will follow up on your suggestion to make a more comprehensive survey.
We appreciate your continued dedication and effort in evaluating our manuscript.
[A] Towards Trustworthy Explanation: On Causal Rationalization. ICML 2023. | Summary: This paper studies selective rationalization. Many methods use the maximum mutual information (MMI) criterion to find the most indicative rationale to explain a target label. As it has been shown in the past, this criterion is, by design, sensitive to spurious correlation. This paper proposes a novel criterion instead of "fixing" MMI, called Minimum Conditional Dependence (MCD). The authors identify two stages from which spurious correlations may come from and propose a causal framework to circumvent them.
Method
The paper assumes that the beer dataset has been generated on 3 aspects: Appearance, Taste, and Smell. This is wrong since the dataset has 5 aspects in total (palate and overall). Nevertheless, the motivating example is convincing. Leveraging the concept of d-separation is novel. However, it is unclear how dissimilar the proposed approach is w.r.t. [1,2], for example (L240 and L130 don't really discuss the differences). A downside of the approach is the need to train one model per aspect. It would be interesting to discuss extending the proposed model.
In the experiment section, I would appreciate having the results for the other aspects, and potentially, the original beer dataset to understand the effect of MCD (while not many papers utilize the original beer dataset, I do think it's important). The number of baselines is not consistent through the experiments: T3-T4 would require more models to assess the superiority of MCD.
Finally, I am surprised that MCD achieves better rationalization performance on average but worse predictive performance, which defeats the purpose of "rationalization": explaining the output of a model. Why is there such a gap?
The related work is quite complete. Nevertheless, I have identified a few missing citations [3-5].
1 Chang et al. 2021, Invariant Rationalization (ICML)
2 Yue et al. 2022, Interventional Rationalization
3 Chang et al. 2019, A Game Theoretic Approach to Class-wise Selective Rationalization (NeurIPS)
4 Antognini et al. 2021, Multi-Dimensional Explanation of Target Variables from Documents (AAAI)
5 Antognini and Faltings 2021, Rationalization through Concepts (ACL)
Strengths: - The causal framework is novel, and using d-separation is a nice replacement to circumvent MMI's flaws
- MCD obtains better rationalization performance
- The paper is well written and motivated.
Weaknesses: - It is unclear how dissimilar is the proposed approach w.r.t. [1,2] for example (L240 and L130 don't discuss really the differences)
- The proposed model only works for 1 aspect.
- MCD obtains worse predictive performance, which defeats the purpose of "rationalization": explaining the output of a model.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Why does MCD underperform in terms of predictive performance?
- How could we extend MCD to multi-dimensional rationales?
- Could you report results of MCD with BERT for the other experiments since it provides better performance?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors do not discuss about the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your detailed review and constructive feedback on our paper!
**A1 (regarding [1,2]):** Our key differences with [1,2] are discussed in L66-85 of Sec.1 and App.C, where we delve into their limitations and theoretical flaws. [1] employs IRM and [2] uses backdoor adjustment to add regularizers to the MMI loss. Our approach is not inspired by either IRM or backdoor adjustment. We bring new theoretical insights linking causality to our model-agnostic MCD criterion, potentially benefiting other methods. Moreover, our results surpass those of [1] and [2] by substantial margins, with improvements of 19.1 and 13.1, respectively, averaging over nine settings in Table 2's F1 scores.
**A2 (only works for 1 aspect):** Could you please provide a clearer delineation for the term "1 aspect"? We see that MCD outperforms all baselines in terms of rationale quality across all aspects on the standard BeerAdvocate and HotelReview benchmarks in Table 2-3.
**A3 (worst predictive performance):** Our analysis suggests otherwise, could you elaborate on your concerns? In the nine settings of BeerAdvocate, our MCD secured the highest accuracy in four cases, was only marginally behind the best FR in another four, and matched FR in the remaining one. Moreover, our results consistently outperformed INVRAT. On the standard Beer and Hotel datasets, our accuracy drop, compared to the best method, was at most 0.9%. Overall, when averaged across aspects or sparsities, our performance matches or exceeds the best baseline FR.
The accuracy of FR and MCD on BeerAdvocate (Table 2):
| | S | Appearance | Aroma | Palate | Average |
|:-:|:-:|:-:|:-:|:-:|:-:|
| FR | 10 | 75.8 | 87.7 | 87.9 | 83.8 |
| MCD | 10 | 81.5 | 87.5 | 87.3 | **85.4** |
| FR | 20 | 84.6 | 89.3 | 88.2 | **87.4** |
| MCD | 20 | 85.5 | 88.4 | 87.7 | 87.2 |
| FR | 30 | 86.4 | 88.1 | 87.0 | 87.2 |
| MCD | 30 | 86.7 | 90.2 | 87.0 | **87.9** |
| FR | average | 82.3 | 88.4 | **87.7** | 86.1 |
| MCD | average | **84.6** | **88.7** | 87.3 | **86.9** |
**A4 (Why lower predictive performance):** I guess you are referring to Table 4 when saying that MCD gets the worst predictive performance. The synthetic experiments of Table 4 are crafted to demonstrate MCD's efficacy against degeneration. Note that degeneration might not adversely affect predictive accuracy; sometimes it can even improve it. In Table 4, as the rationale quality decreases with increasing skew, the accuracy of RNP conversely rises (in the settings of Table 4, the explainer is specially initialized to select trivial patterns to indicate the label. If the predictor fits such patterns, the explainer and predictor can collude to get high accuracy. But if the predictor fits the true semantics, the accuracy will not be as high). However, such elevated accuracy might be unstable and drop with shifts in trivial patterns. The accuracy of our MCD, though lower than that of RNP and FR, is more aligned with the rationale quality, indicating less degeneration.
Notably, a recent study claims that relying on causal features doesn't always guarantee optimal accuracy [A].
**A5 (extend MCD to multi-dimensional rationales):** While it's typical in current methods to train a model for each aspect, we agree that a consolidated approach is desirable. Research in this direction, such as the valuable references you provided, offers promising techniques. We're contemplating leveraging their insights to refine MCD for multi-dimensional rationale extraction. However, this endeavor remains a future aspiration, and is somewhat beyond the scope of this paper.
**A6 (more results with BERT):** We are sorry, but due to limited GPU resources, only six of the nine experiments from Table 2 have been completed in time, and the results are shown in Table 3 of the rebuttal.pdf.
Most prior methods underperform when using BERT. The fine-tuning intricacies of BERT make it difficult to verify specific reasons for performance improvements. Thus, we primarily employed GRUs to validate MCD's efficacy. Also, due to resource limitations, we weren't able to conduct all experiments with BERT. We plan to address this in future work and will note this limitation in our paper. Thank you for your suggestion.
**A7 (original beer):** The experiments in Table 2 are run on the non-decorrelated dataset. We now report the results for the taste aspect (since all other aspects can serve as a causal part of the overall label, we don't include the overall aspect) in Table 2 of the **rebuttal.pdf**. (We do not reimplement INVRAT on this aspect, and the reasons are in **A9**).
**A8 (references):** Thank you for providing us such valuable references. We find they cover very interesting topics that we hadn't considered. We will discuss them in Section 2 and consider drawing inspiration from them to expand our approach to multi-dimensional rationales in the future.
**A9 (baselines in T3-4):** Thank you for your suggestion. We have now reimplemented Inter_RAT and added it to Table 3. The results are in Table 1 of the **rebuttal.pdf**. Since INVRAT doesn't provide runnable code and the details of how to create different environments are not very clear, we failed to reimplement INVRAT. Another reason we didn't include INVRAT and Inter_RAT in Table 3 is that FR has been shown to outperform INVRAT and Inter_RAT by a large margin on BeerAdvocate (the main dataset used in INVRAT and Inter_RAT), so we considered comparing MCD to FR sufficient.
Table 4 works more as an ablation study (where feature correlations are ablated) to verify the effectiveness in addressing degeneration. FR is specifically designed to address degeneration and has achieved SOTA results in that direction. We compare our MCD with vanilla RNP to validate its effectiveness and with FR to assess its competitiveness. The competitiveness of our MCD in real-world scenarios is primarily validated by Table 2-3, rather than Table 4.
[A] Spuriosity Didn't Kill the Classifier: Using Invariant Predictions to Harness Spurious Features. arXiv:2307.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for taking the time to review our paper! With best wishes to you and yours! | Summary:
The maximum mutual information criterion is commonly used for rationalization, but it uncovers associations rather than causal relationships. The authors propose to identify non-causal features that are independent of the labels given the causal features and the ‘minimum conditional dependence’ criterion, which does not require prior expert knowledge. Experiments are focused on the popular BeerAdvocate and HotelReviews datasets, though mostly on the former, and obtain very competitive results against the other, well-chosen baselines.
Strengths: - The treatment of this problem using causal terminology is appropriate and interesting, and attempts were made to formalize some of these concepts (in Sec 4.2), with only some challenges, listed below.
- Appropriate baselines and datasets are provided in the experiments, as per other related (uncited) work. Moreover, attempting to overcome limitations of ‘some of the baseline methods’ (L290) by establishing consistent ‘settings’ is good, although what those settings are (beyond using GloVE) should be included.
- The rates of improvement in Table 2 are impressive. In some cases, tests of statistical significance would be useful to include, at least against FR, though apparently not necessary.
Weaknesses: - The treatment of related works is extremely superficial in Section 2. Approximately half of the section on rationalization is merely a list of papers barely described by their topics and summarized in whole as being ‘orthogonal’. To some extent, a literature survey was better covered in Section 1,
- There are various minor issues including:
- atypical English (L2 “pieces of their inputting texts”; L48 “is easily to be affected”; L56 “[lowercase] comments regarding”; L188 “a image”)
- very small text (Fig 1)
- incomplete citations (e.g., Yue et al (2023), L514.)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - This seems like a companion piece to “MGR: Multi-generator Based Rationalization” by Liu et al (2023), as much of the text is identical and they also deal with spurious correlation and degeneration, but the focus is on the MCD criterion rather than using multiple generators. What other differences can easily be described in your paper, if this paper were cited?
- Is your ‘derivation’ in appendix B1 really an example of Bayes’ theorem, or just an application of marginalization and the chain rule?
- Assumption 1 (L223) should be better explained in Appendix B, as the ‘temporal sequence’ explanation provided in the paper is insufficient, especially if it refers to some aspect of dataset acquisition or prediction rather than the actual state of the world. Can you expand on this?
- the FR method (L274) is claimed to be the SotA, but even the paper by Liu et al (2022) to which it refers does not make a compelling case for it being the true SotA — how is this claim proven conclusively?
- Given the new methodological approach, it would be interesting to include some study of the required computational resources in the main body of the text, beyond the brief mention of RTX3090 and appendix A.5’s epochs, including possible ablations. Would that be possible?
- Can you add an explanation (or rationale) for why fine-tuning BERT would be ‘challenging’ (L299) beyond just providing citations and some indication of potential overfitting (section beginning L331)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - The last paragraph of Sec 6 touches briefly on the limitations. It is suggested that not computing the ‘precise values’ of causal effects would be a limitation, although it is not clear why that would be. Various other forms of limitation, including the relatively restricted datasets (in terms of task, scope, and number) used in the experiments or the focus on text, could also be addressed, for example.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for dedicating your time and expertise to review our paper. Your insightful comments and suggestions are highly valued and appreciated.
**A1 (related work):** Thank you for pointing out this issue. Due to page limitations, we focused mainly on papers closely tied to our work, especially those that investigate spurious correlations. However, we agree with you that the first paragraph of Section 2 was a bit too brief. We will expand this content in the next version to provide a more complete understanding for new readers outside this field.
**A2 (minor issues):** Thank you for your observation. We will thoroughly revise and proofread the document in our next iteration to correct these minor issues. The settings (Strengths 2, L290) include the word embedding (GloVe), the networks (GRUs), and similar sparsities (10%, 20%, 30%).
**A3 (regarding MGR):**
- First, the problem identification. Although MGR can address spurious correlation and degeneration simultaneously with only one model, in the context of MGR, spurious correlation and degeneration are considered as two separate problems and treated with distinct theoretical analyses. However, our MCD is the first to unify these two problems into one: they both arise from the non-causality of MMI. As a result, we can better understand these two problems with a unified theoretical insight.
- Second, the potential implications. The core idea behind MGR to mitigate the influence of spurious correlations is the central limit theorem, which involves using more generators to reduce the probability of missing causal rationales. However, the model's lack of flexibility makes it less compatible with other rationalization variants. Our MCD, an optimization criterion, is model-agnostic and more versatile. Besides, we hope that the motivational process of our MCD can inspire research in other causal discovery fields, such as causal representation learning.
- Third, the model complexity. MGR relies on the central limit theorem and involves multiple generators, which requires more computational resources than other methods.
**A4 (derivation in App.B1):** It's just an application of marginalization and the chain rule. Thank you for your careful observation, and we will change the statement accordingly.
**A5 (regarding Assumption 1):** As far as we know, most of the real-world datasets are built in a collecting-annotating form. In such a form, $Y$ is given according to $X$, and the annotators won't edit $X$ after giving $Y$. So, Assumption 1 holds.
We really appreciate your insight, which inspires us to think of some cases that might break Assumption 1. One is synthetic data like ColorMNIST. In ColorMNIST, a human first annotates an image, and then edits the image again according to the assigned label. Another scenario is the collection of time series data, where annotators label the data based on existing information and then adjust the data collection method according to the previous labels. This creates a cyclic causal graph. However, in the causal inference literature, most researchers only consider acyclic graphs. Nevertheless, we greatly appreciate your insight and will add this discussion to the Limitations section in the next version.
We note that in cases where Assumption 1 doesn't hold, we still have Lemma 1, i.e., d-separation serves as a sufficient condition for selecting causal rationales. When Assumption 1 holds, it becomes a necessary and sufficient condition.
**A6 (FR SOTA):** FR was introduced in late 2022 and showed superior performance over recent baselines DMR (AAAI 2021) and A2R (NeurIPS 2021) on both the Hotel and Beer datasets. Our own tests also showed that FR outperformed INVRAT and Inter_RAT in most scenarios on the correlated Beer dataset (Table 2). Therefore, we suggested it as a current SOTA. We acknowledge this might not be entirely accurate, and we'll modify this statement in our revised version.
**A7 (computational resources):** Thank you for your valuable suggestion. Taking the appearance aspect of BeerAdvocate as an example, the computational resources consumed by the different methods are:
| | batchsize | lr | epochs | memory (MB) | RTX3090 hours |
|:-:|:-:|:-:|:-:|:-:|:-:|
| FR | 256 | 0.0001 | 300 | 3504 | 0.21 |
| Inter_RAT | 256 | 0.001 | 20 | 3660 | 2.34 |
| MCD | 128 | 0.0001 | 150 | 2630 | 0.28 |
Since the memory usage is affected by the batchsize, we further assign our MCD with a batchsize of 256 (the same as FR and Inter_RAT), and the memory usage then becomes 3806 MB.
Since our approach is straightforward and does not introduce additional regularizers, we have not identified specific modules that require dedicated ablation. However, Table 4 is a synthetic experiment crafted to demonstrate MCD's efficacy against degeneration; it can be considered somewhat of an ablation study (where feature correlations are ablated). In the future, we will consider integrating our MCD criterion with other sophisticated methods and conducting appropriate ablations.
**A8 (why BERT doesn't work well):** This could be a combination of several complex reasons. Here are some possible factors: First, fine-tuning large models is challenging in itself, especially when there is not much data available. Second, most previous methods are very sensitive to hyperparameter tuning (implied in the A2R paper), which could lead to training instability and a higher likelihood of getting stuck in local optima. Third, the more powerful the explainer is, the more easily degeneration occurs (implied in a recent paper [A]).
We agree that it would be valuable to explore specifically what happens with BERT. However, it's somewhat beyond the scope of this paper, and we leave it as future work.
**A9 (Limitations):** We appreciate your suggestions, and we will certainly expand on these points and discuss these limitations in more detail in the next version.
[A] Unsupervised Selective Rationalization with Noise Injection. ACL 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you. These are reasonable responses, and I still am comfortable with my rating.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for taking the time to review our paper! With best wishes to you and yours! | Summary: This work focuses on the problem of rationalization that aims to extract explanatory sentences that serve as rationales along with training a predictor for the downstream task. Prior work on rationalization has typically employed the maximum mutual information (MMI) criterion. However, this criterion does not uncover causal relationships and latches on to feature correlations and degenerations. The key insight in this work is that non-causal features are independent of the target labels given the causal features, based on which the authors propose a minimum conditional dependence criterion to discover causal rationales.
Strengths: - Paper is written well.
- Offer a unified perspective of feature correlations and degenerations (described in prior work on rationalization).
- Propose a new MCD criterion to identify causal rationales.
- Outperform existing MMI-based rationalization techniques on two multi-aspect sentiment classification tasks.
Weaknesses: One of the main contributions of this work is that the proposed MCD criterion leads to the discovery of causal rationales. It would be useful for the reader to see some examples of generated rationales via MCD, and also highlight failure cases when MCD fails to retrieve the causal rationales.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Why is Equation 16 a good approximation? Define \Omega(M) in Equation 16.
- In Table 5, why are the F1 scores of FR-ELECTRA so poor on the Appearance and Palate aspects?
- Some anecdotal examples of causal rationales that were identified via MCD as opposed to MMI would be useful for the reader.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Some limitations have been listed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you deeply for taking the time to thoroughly review our paper. We are truly grateful for the insights and recommendations you've provided.
**Q1:** It would be useful for the reader to see some examples of generated rationales via MCD, and also highlight failure cases when MCD fails to retrieve the causal rationales.
**A1:** We appreciate your suggestion a lot. We have now added some examples and failure cases in Figure 1 and 2 of the rebuttal.pdf.
In the example in Figure 1, the causal rationale is the text describing the good aroma of the beer. We see that RNP selects only the text describing the Palate aspect. Inter_RAT selects part of Aroma, but also Palate (e.g., "felt very smooth"). FR also selects both Aroma and Palate. Our MCD selects only Aroma.
In the failure case in Figure 2, we observe that MCD effectively identifies simpler sentiment-oriented descriptions related to aroma but struggles to recognize higher-level logical reasoning. For instance, expressions like "the first one that has been" often carry evident emotional inclinations, but such phrases are not selected by MCD due to their reliance on common sense and logical reasoning.
**Q2:** Why is Equation 16 a good approximation? Define \Omega(M) in Equation 16.
**A2:** As you know, the cross-entropy can be expanded as follows:
$H_c(Y,\hat{Y}|X)=H(Y|X)+D_{KL}(P(Y|X)||P(\hat{Y}|X))$.
When we get the minimum cross-entropy $H_c(Y,\hat{Y}|X)$, we have $D_{KL}(P(Y|X)||P(\hat{Y}|X))=0$, which means $P(Y|X)=P(\hat{Y}|X)$. In this case, $P(\hat{Y}|X)$ is a good approximation for $P(Y|X)$. And it's the same for how $P(Y|X_Z)$ is approximated by $P(\hat{Y}|X_Z)$.
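For readers less familiar with this identity, here is a minimal numerical sketch (illustrative only, not the paper's code; the two distributions are made-up stand-ins for $P(Y|X)$ and $P(\hat{Y}|X)$) confirming that cross-entropy decomposes into entropy plus KL divergence:

```python
import math

# Hedged illustration of the identity used above, for discrete distributions p and q:
#   H_c(p, q) = H(p) + KL(p || q)
# so minimizing cross-entropy drives KL(p || q) to 0, i.e., q approximates p.

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def cross_entropy(p, q):
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

def kl_divergence(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.3, 0.7]   # stand-in for P(Y|X)
q = [0.5, 0.5]   # stand-in for P(Y_hat|X)

# The decomposition holds exactly, and KL vanishes iff q matches p.
assert abs(cross_entropy(p, q) - (entropy(p) + kl_divergence(p, q))) < 1e-12
assert kl_divergence(p, p) == 0.0
```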
As for $\Omega(M)$, it's defined in Equation 4 (Line 154). We appreciate your suggestion and agree that it will be helpful to readers if we restate it alongside Equation 16, and we will add it in the next version.
**Q3:** In Table 5, why are the F1 scores of FR-ELECTRA so poor on the Appearance and Palate aspects?
**A3:** Thank you for your question. In fact, we think we should approach this phenomenon from the opposite perspective, that is, why FR performs well on the Aroma aspect. As shown in Table 6, we see that most previous methods perform very poorly when conducted with over-parameterized BERT. On the Appearance aspect, our reimplementation of FR is better than the one reported by FR itself (the results in Table 6 are reported by FR itself).
So why can FR perform well on the Aroma aspect? We are not sure, but we note that a recent paper called CR [A], published at ICML 2023, shows similar results:
| F1 | Appearance | Aroma | Palate |
|:-----------:|:----------:|:-----:|:------:|
| CR-BERT | 28.0 | 39.0 | 26.5 |
| FR-ELECTRA | 18.0 | 56.7 | 11.3 |
We see that CR also performs well on Aroma, but poorly on Appearance and Palate. The reason may be that Aroma is a relatively easy aspect and the other two aspects are relatively harder. However, a more precise understanding of the underlying mechanism may be highly complex, which to some extent goes beyond the scope of this paper.
[A] Towards Trustworthy Explanation: On Causal Rationalization. ICML 2023.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks to the authors for their detailed response (including new results) and clarifications to my questions. I'm raising my score to 7.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for taking the time to review our paper! With best wishes to you and yours! | Rebuttal 1:
Rebuttal: We are deeply grateful to every reviewer for their in-depth analysis and constructive feedback on our manuscript.
Here are the figures and tables of some of the new experimental results, attached to rebuttal.pdf.
Pdf: /pdf/0bbde21d2c879c5200edfce7993604988dd25321.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Let the Flows Tell: Solving Graph Combinatorial Problems with GFlowNets | Accept (spotlight) | Summary: This paper suggests a new graph combinatorial optimization method by leveraging a generative flow network (GFlowNet) and other novel techniques to improve it. Maximum independent sets (MIS) and their variants are very important problems, as they can be applied to several high-impact tasks such as network communication. As these problems are NP-hard, exact methods cannot solve them within a reasonable time, so approximate solvers such as deep learning policies can produce near-optimal solutions quickly. This work follows this research trend and achieves state-of-the-art results over the deep learning baselines.
A major contribution is the usage of GFlowNet rather than PPO, because GFlowNet's exploration can cover the symmetric nature of the combinatorial solution space using a DAG-based MDP construction. The novelty of this work lies in improving GFlowNet for the long trajectory generation of combinatorial optimization, so this work proposes (1) transition-based training and (2) intermediate learning signal training, which are simple yet intuitive ideas.
While this work is concrete and interesting, I have several questions and concerns; please address these concerns in the rebuttal discussion phase.
I just wrote the questions below.
Strengths: This work is novel, and its performance seems to be promising. First, they cleverly leverage the existing Markov decision process formulation of learning what to defer (LwD), where the state is described as $s \in \{0,1,2\}$ with 0 meaning included, 1 excluded, and 2 deferred; please clearly highlight that this MDP idea is from the prior work LwD [1].
Second, they leveraged GFlowNet rather than PPO so that it could consider the symmetric nature of combinatorial solution space.
Third, their additional techniques are widely studied in GFlowNet research (e.g., the intermediate reward is similar to Generative Augmented Flow Networks [2], and the transition-based method is identical to sub-trajectory balance [3]), but they made intuitive variations for combinatorial optimization.
[1] Ahn, Sungsoo, Younggyo Seo, and Jinwoo Shin. "Learning what to defer for maximum independent sets." International Conference on Machine Learning. PMLR, 2020.
[2] Pan, Ling, et al. "Generative augmented flow networks." arXiv preprint arXiv:2210.03308 (2022).
[3] Madan, Kanika, et al. "Learning GFlowNets from partial episodes for improved convergence and stability." arXiv preprint arXiv:2209.12782 (2022).
Weaknesses: One could say that the technical novelty is limited, as every technique has already been actively explored; however, I think making a novel combination of existing techniques is also significant.
I think the description of many ideas follows existing trends in GFlowNet research; I enjoyed reading this paper, but readers who don't know about GFlowNet work may miss many detailed parts of this technique. Please describe each technique explicitly, indicate where each idea is inspired from, and state the major differences, e.g., sub-trajectory balance vs. transition-based learning (assume that the reader is also not familiar with sub-trajectory balance).
Finally, there are several neural combinatorial optimizations works [1,2] that study the symmetric nature of combinatorial optimization. I humbly suggest to include as a reference paper.
[1] Kim, Minsu, Junyoung Park, and Jinkyoo Park. "Sym-nco: Leveraging symmetricity for neural combinatorial optimization." Advances in Neural Information Processing Systems 35 (2022): 1936-1949.
[2] Kim, Hyeonah, et al. "Symmetric Exploration in Combinatorial Optimization is Free!." arXiv preprint arXiv:2306.01276 (2023).
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. How does diversity (e.g., the edit distance between sampled solutions) change when your new techniques are applied (compared with other methods, including the trajectory balance method)?
2. What kinds of post-processing methods are used for solution generation? Did you sample multiple solutions from the learned sampler? Then what is the actual sampling width? How about applying local search just as in learning what to defer (LwD) [1]?
3. To compare with LwD, did you put 2opt for LwD following their original paper?
4. Can this method extend to other combinatorial optimization, such as the traveling salesman problem (TSP)?
[1] Ahn, Sungsoo, Younggyo Seo, and Jinwoo Shin. "Learning what to defer for maximum independent sets." International Conference on Machine Learning. PMLR, 2020.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: GFlowNet is an on-going framework having a lot of limitations. Please make explicit limitations on the main paper for future researchers.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Please make an explicit description of each technique and give a reference idea of where you are inspired by and what is the major difference, e.g., sub-trajectory balance vs. transition-based learning (assume that reader is also not familiar with the sub-trajectory balance).
We do not use the sub-trajectory balance objective in this work. Sub-trajectory balance takes sub-trajectories (which could be longer than a single transition but shorter than a complete trajectory) to construct training losses. On the other hand, transition-based learning is only applicable to detailed-balance-based methods (including detailed balance and forward-looking detailed balance). It has the same expected gradient as trajectory-level detailed balance training, but just uses the formula in line 238 rather than Eq. 5 to make the optimization more efficient. We will include this discussion in the final version.
> Finally, there are several neural combinatorial optimizations works [1,2] that study the symmetric nature of combinatorial optimization. I humbly suggest to include as a reference paper.
[1] Kim, Minsu, Junyoung Park, and Jinkyoo Park. "Sym-nco: Leveraging symmetricity for neural combinatorial optimization." Advances in Neural Information Processing Systems 35 (2022): 1936-1949.
[2] Kim, Hyeonah, et al. "Symmetric Exploration in Combinatorial Optimization is Free!." arXiv preprint arXiv:2306.01276 (2023).
Thanks for the references! We will make sure to include them in the final version.
> How does diversity (e.g., the edit distance between sampled solution) changes when your new techniques are applied? (to compare with another method, including the trajectory balance method)?
In the MIS small scale benchmark, we test the diversity of different GFlowNet variants. For each graph configuration in the test set, we let each algorithm generate thirty solutions and compute their average pairwise distance. The diversity is the mean distance averaged across the test set. The distance is computed as the Jaccard distance between two solution vertex sets. The diversity of trajectory balance, detailed balance (trajectory level), detailed balance (transition level), forward-looking detailed balance (trajectory level), and forward-looking detailed balance (transition level) is $0.714$, $0.572$, $0.637$, $0.505$, and $0.618$, respectively.
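To make the metric above concrete, here is a small sketch (an illustrative reimplementation, not the authors' released code) of the diversity computation described: the average pairwise Jaccard distance over the vertex sets of sampled solutions.

```python
from itertools import combinations

def jaccard_distance(a, b):
    """1 - |A ∩ B| / |A ∪ B| between two solution vertex sets."""
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def average_pairwise_diversity(solutions):
    """Mean Jaccard distance over all unordered pairs of sampled solutions."""
    pairs = list(combinations(solutions, 2))
    if not pairs:
        return 0.0
    return sum(jaccard_distance(a, b) for a, b in pairs) / len(pairs)

# Three hypothetical independent-set samples on the same graph:
samples = [{0, 1, 4}, {1, 2, 4}, {0, 1, 4}]
diversity = average_pairwise_diversity(samples)
```

In the rebuttal's setup, `solutions` would hold the thirty sampled solutions per test graph, and the reported number is this value averaged across the test set.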
> What kinds of post-processing methods are used for solution generation? Did you samples multiple samples from the learned sampler? Then what is the actual sampling width? How about applying local search just as learning what to defer (LwD)?
Yes, we sample multiple times from the learned sampler and do not use any further post-processing technique; see Appendix C for details. We also agree that applying local search with GFlowNet solutions will definitely be a promising future direction for this.
> To compare with LwD, did you put 2opt for LwD following their original paper?
LwD does not provide code for local search; also in the LwD paper, only part of the experiments are done with 2opt. To ensure a fair comparison, we also do not put 2opt after GFlowNet and compare it with other baselines including LwD without 2opt.
> Can this method extend to other combinatorial optimization, such as the traveling salesman problem (TSP)?
TSP problems are one of the future directions where we hope to see the advantage of GFlowNets on this family of combinatorial optimization problems. For example, the action of GFlowNet could be to choose the next vertex to visit (for the traveling salesman). The architecture of policy and flow could be a transformer like in previous approaches [1].
[1] Attention, Learn to Solve Routing Problems!
---
Rebuttal Comment 1.1:
Comment: Thank you for your timely response. My concerns are resolved. Also, I find the extension of GFlowNet into TSP through the utilization of the AM [1] particularly intriguing. I uphold a positive evaluation of this work, as it delivers a pivotal initial contribution to the realm of combinatorial optimization within the framework of GFlowNet.
[1] Attention, Learn to Solve Routing Problems!
---
Reply to Comment 1.1.1:
Title: Thank you to Reviewer PEdA!
Comment: Thank you for the kind reply.
We appreciate the reviewer's recognition of the novelty and promising performance of our approach. We would also like to thank the reviewer for the time and effort in reviewing our work and considering our rebuttal, which has greatly contributed to the improvement of our paper! | Summary: This paper presents a new approach for learning to solve graph combinatorial optimization with GFlowNets. GFlowNets (or, broadly, generative models) have the advantage of discovering multiple (near-)optimal solutions, which is better than standard RL or SL. The authors modify GFlowNets to work with larger-scale problems in CO and provide extensive experiments on 4 different problems.
---------------------
Post-rebuttal: I appreciate the authors for the feedback and I am now more positive about this paper.
Strengths: * This paper is well-written.
* The motivation for introducing GFlowNets to CO is sound and interesting.
* The authors made modifications to GFlowNets to fit larger-scale CO problems.
Weaknesses: * My major concern about this paper is the selection of baselines and their implementations in experiments.
* The authors mentioned in L55 that "there are attempts to fix these issues (Kwon et al., 2020; Ahn et al., 2020)" from the RL literature, but these methods are not implemented and compared in experiments.
* Why is the selection of baselines different for MIS and the other 3 problems, especially considering that MIS and Max Clique are complementary?
* What will the performance be like if we let Gurobi run the same amount of time compared to your GFlowNet solvers?
* During inference, do you implement a search algorithm after the GFlowNets? What about the other neural solver baselines?
* How do you implement, configure, and tune the baselines? Do you implement the same GIN as in your implementation of GFlowNets? For example, to my knowledge, PPO's performance is strongly dependent on hyperparameter tuning and the configuration of tricks. Besides, the probabilistic method (Karalias & Loukas, 2020) has two types of configurations (fast/accurate); which one did you choose?
* I also notice you follow some recent neural CO solvers such as Qiu et al. (2022, DIMES) and Sun & Yang (2023, DIFUSCO) but do not consider them in main experiments. Can you offer any justifications?
* The training process does not seem clear to me. Is the training objective to minimize $l_{DB}+l_{TB}$? How is the reward being used? Seeing that GFlowNet is quite a new framework, we should not expect the readers to be that familiar with the domain knowledge.
* Some details seem to be missing:
* What is the meaning of "MIS size" in Fig 4? On which dataset did you get the numbers in Fig 4?
* Other minor issues:
* The notation of $P_F^\top$ is improper. As I understand, $\top$ does not mean "transpose" here and causes confusion to readers.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the questions in the "weaknesses" part.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The authors mentioned that "there are attempts to fix these issues (Kwon et al; Ahn et al.)" but these are not compared in experiments.
Please note that we have compared with Ahn et al., which is the “PPO” baseline in Table 1. The work of Kwon et al. is designed for routing problems (e.g., TSP), which differ from the problems we study in this work, so we do not include it. That said, we will definitely take routing problems as one of the future directions where we hope to see the advantage of GFlowNets on this family of combinatorial optimization problems.
> different baselines between MIS and others
Our choice of baselines was guided by a desire to provide a comprehensive comparison across a diverse set of methods. For the MIS problem, we have already included a substantial number of baselines in our benchmark. Notice that most baselines are specialized for MIS and not applicable for other tasks. For the other problems, we follow the benchmark that is used by [1], aiming to include a broader range of baselines, particularly those from the unsupervised learning domain.
Regarding the complementarity of the MIS and Max Clique problems, we agree with your point. To cover a diverse set of baselines on max clique, we add an experiment on the Twitter dataset following Sec. 4 of [2] and [3]. In this benchmark, the methods of [2], [3], [4], and GFlowNet achieve average results of $0.926$, $0.924$, $0.987$, and $0.992$, respectively. This demonstrates the strong performance of GFlowNets. Our goal was to cover a diverse set of baselines within our limited time; by comparing these, we provide a more comprehensive evaluation.
[1] Annealed Training for Combinatorial Optimization on Graphs
[2] Unsupervised Learning for Combinatorial Optimization with Principled Objective Relaxation
[3] Erdos Goes Neural: an Unsupervised Learning Framework for Combinatorial Optimization on Graphs
[4] Graph Neural Networks for Maximum Constraint Satisfaction
> run Gurobi in limited time
We ran Gurobi on the large-scale MIS task with a limited time budget. Given a budget slightly larger than GFlowNet's, Gurobi obtains an average size of $34.81$, smaller than GFlowNet's $37.48$. Therefore, Gurobi performs worse than GFlowNets under a comparable time budget constraint.
> a search algorithm during inference?
We do not implement any search algorithm based on GFlowNets (see the evaluation part of Appendix C). That being said, it will be a very promising future direction to explore to further boost the performance of GFlowNet solvers. The Intel and DGL baselines are built on a tree search algorithm, while other baselines do not use search algorithms. Thus it's fair to say that GFlowNet achieves better performance.
> How do you implement, configure and tune the baselines?
For different methods, we use the same evaluation and hyperparameter selection protocols to make sure the comparison is fair (see Appendix D). For the RL baseline we stick to the configurations in [1] to ensure that we are not “creating” a weaker baseline that differs from the original work.
[1] Learning What to Defer for Maximum Independent Sets
> probabilistic method has two configurations... which one do you choose?
We implement the conditional sequential decoding proposed in Erdos [1], which is slower but more accurate than direct Monte Carlo sampling (see Sec. 3.3 in [1]). Therefore, it is fair to say that our comparison is reasonable.
[1] Erdos Goes Neural: an Unsupervised Learning Framework for Combinatorial Optimization on Graphs
> ... some recent neural CO solvers but do not include them
That isn't true: we do include their methods in Appendix D to show the advantage of GFlowNets, especially on large-scale tasks. We do not include them in the main text because they do not use the same graph simulation protocol as ours, and we encountered bugs when trying to reproduce their results on our data. During the week after the main-text submission deadline, we ran experiments on ER graphs with GFlowNets and thus could compare with them, so we put these results into Appendix D; they show the advantage of GFlowNets. Further, our approach is purely unsupervised, while theirs relies on solutions precomputed by a solver, so comparing unsupervised with supervised methods is not entirely fair. On the other hand, this means GFlowNet uses less information but still achieves better results.
> Is objective $\ell_{TB} + \ell_{DB}$? How is the reward used?
No; in fact, the forward-looking (FL) objective in Eq. 6 is a modification of the detailed balance objective (one can see the similarity between Eq. 2 and Eq. 6) that uses cumulative reward information in the parametrization of the state flow. For DB-based methods, the reward is used as the state flow of the last state in a complete trajectory. We will make sure to specify the usage of reward information in the final version of the paper.
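For readers less familiar with GFlowNets, the forward-looking reparametrization described above can be sketched in a few lines. This is an illustrative sketch of the general idea only, not the paper's implementation; all function and variable names are hypothetical.

```python
def fl_db_residual(log_R_s, log_F_tilde_s, log_pf,
                   log_R_next, log_F_tilde_next, log_pb):
    """Squared residual of the detailed-balance condition under a
    forward-looking parametrization log F(s) = log R(s) + log F~(s).

    Because the (partial) reward R(s) enters the flow of every state,
    intermediate reward signals contribute to every transition, not
    only to the terminal one.
    """
    lhs = log_R_s + log_F_tilde_s + log_pf        # log [F(s)  P_F(s'|s)]
    rhs = log_R_next + log_F_tilde_next + log_pb  # log [F(s') P_B(s|s')]
    return (lhs - rhs) ** 2

# At a terminal state, log F~ is clamped to 0, so the state flow reduces
# to the reward itself -- the reward grounds the learning signal.
```

Summing this residual over (possibly subsampled) transitions gives a training loss that is minimized when the flow-balance conditions hold.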
> What is "MIS size" in Fig 4? On which dataset to get Fig 4?
"MIS size" is the size of the vertex set that an algorithm obtains for a maximum independent set task. The larger the size, the better the method. More details about evaluation metrics are written in the paragraph starting from line 313. As specified in line 338, we conduct experiments on a small scale simulated RB graph dataset to produce Fig. 4.
> The notation of $P_F^{\top}$ is improper. As I understand, $\top$ does not mean "transpose" here and causes confusion to readers.
Here $\top$ is for “terminating” and $P_F^{\top}$ is the distribution of terminating states induced by GFlowNet’s forward policy $P_F$ (line 88). This notation is borrowed from past work on GFlowNets and we will replace the improper notation.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the feedback. I believe the quality of this paper will be greatly improved if the authors clarify the aforementioned details and include new results in future revisions. I adjust the score to 5.
---
Reply to Comment 1.1.1:
Title: Thank you to Reviewer VHLn!
Comment: Thank you for increasing the score.
We appreciate the reviewer's recognition of the sound and interesting motivation behind our paper, as well as our careful and novel design of the approach, and the acknowledgment of the quality of the writing.
We would also like to thank the reviewer for the time and effort in reviewing our work and considering our rebuttal, which has greatly contributed to the improvement of our paper! | Summary: This paper leverages generative flow networks (GFlowNets) to obtain diverse solution candidates for combinatorial optimization problems on graphs without expert supervision.
GFlowNets were recently introduced as a way to sample structured objects $\mathbf{x}$ with a likelihood $P(\mathbf{x}) \propto R(\mathbf{x})$ that is proportional to a terminal reward function (see _Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation_, Bengio et al. 2021).
This is done by casting the sampling as a Markov Decision Process, whose policy is defined by neural networks and trained with a suitable loss to enforce flow balance conditions.
In the study at hand, the reward is chosen as a Gibbs probability distribution $R(\mathbf{x}) = \exp(-\mathcal{E}(\mathbf{x}) / T)$, whose temperature controls solution diversity by specifying the acceptable deviation from the global optimum.
Parametric policies are constructed conditionally on the graph, to allow for generalization.
MDPs are designed in a problem-specific manner, by iteratively constructing sets of vertices that satisfy certain constraints.
Since the MDP trajectories can be quite long, two tricks are suggested to improve training efficiency and credit assignment:
- sample random subsets of transitions instead of a full trajectory
- incorporate intermediate reward signals based on partial solutions
These strategies are implemented and benchmarked against exact solvers, as well as recent ML-based approaches.
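The role of the temperature in a Gibbs-style target $P(\mathbf{x}) \propto \exp(-E(\mathbf{x})/T)$ can be illustrated with a minimal sketch. The energies below are toy values (e.g., negative independent-set sizes for MIS, so lower energy means a better solution), not numbers from the paper.

```python
import math

# Toy energies for three candidate solutions; lower = better.
energies = [-5.0, -4.0, -3.0]

def gibbs_probs(energies, T):
    """Normalized target distribution P(x) proportional to exp(-E(x) / T)."""
    weights = [math.exp(-e / T) for e in energies]
    z = sum(weights)
    return [w / z for w in weights]

sharp = gibbs_probs(energies, T=0.1)    # low T: mass concentrates on the best
diverse = gibbs_probs(energies, T=5.0)  # high T: near-optima retain mass
```

Lowering $T$ approaches argmax sampling; raising it trades optimality for diversity, which is exactly where a distribution-matching sampler such as a GFlowNet is useful.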
Strengths: ### Originality
This paper mixes several existing ideas into a coherent whole:
- GFlowNets and their loss functions
- viewing optimization as probabilistic inference
- MDPs for combinatorial problems
- strategies for credit assignment
This novel combination, along with extensive numerical experiments, constitutes sufficient novelty in my view.
### Quality
The authors demonstrate deep knowledge of the literature on GFlowNets, and the theoretical part is well supported.
Thorough benchmarks suggest their method performs better than its counterparts, although some comparisons may be biased (see below).
I especially appreciated the care given to make the benchmarks transparent and fair (aligning with recent papers, avoiding easy instances, reimplementing and retraining algorithms from scratch to compare them on the same basis).
Additionally, a very welcome ablation study shows the limited impact of hyperparameter settings.
In a nutshell, this is good, serious science.
### Clarity
The writing style is clear and nicely complemented by explanatory pictures.
A lot of space is devoted to the prerequisites on GFlowNets and combinatorial problems, as well as a thorough literature review.
### Significance
I believe this contribution can inspire further developments, since the use of GFlowNets for combinatorial optimization seems well-justified and practically successful.
Weaknesses: ### Originality
To me, it appears that the main novel idea in the paper is the application of GFlowNets to learning combinatorial problems (which is already a worthy one).
The idea of training from subsampled transitions instead of a full trajectory is also new.
However, I would be very surprised if the MDPs presented by the authors were absent from the previous literature, especially given the standard nature of problems such as Maximum Independent Set.
Perhaps some adjustments are necessary for these MDPs to work with GFlowNets, in which case I will welcome clarification!
### Quality
I am not confident enough to comment on the choice of algorithms in the MIS benchmark, but I have an issue with the 3 non-MIS benchmarks, which seem a bit unfair to me.
A major asset of GFlowNets is their sequential construction of the solution, which the authors mention as the reason for their success.
On the other hand, the benchmarked competitors ERDOS and ANNEAL rely on a hypothesis of independence between vertices of the graph, which explains their fast inference.
In light of this, I argue that it would make sense to benchmark GFlowNets against other methods (typically from the RL literature) that also adopt a sequential perspective, to see if the advantage still holds.
### Clarity
n.a.
### Significance
To underline the impact of the paper, it would be helpful to explain why a diverse set of solution candidates is important in practice.
As a combinatorial optimizer myself, it is not obvious to me: when should I settle for a set of diverse, but possibly suboptimal solutions?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: L61: What is new in your MDP design compared to existing formulations?
L105: Where do the final rewards appear?
L242: I thought the whole point was to make epochs faster?
L255: Are there cases where this reward continuation is not applicable?
L640: Why not use a linear formulation? What formulation did you use for the three non-MIS problems?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors do not really discuss the limitations of their approach, and more reflection on that would be welcome.
Societal impact is not relevant in this case.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > To me, it appears that the main novel idea in the paper is the application of GFlowNets to learning combinatorial problems (which is already a worthy one). The idea of training from subsampled transitions instead of a full trajectory is also new. … However, I would be very surprised if the MDPs presented by the authors were absent from the previous literature, especially given the standard nature of problems such as MIS. Perhaps some adjustments are necessary for these MDPs to work with GFlowNets, in which case I will welcome clarification!
> L61: What is new in your MDP design compared to existing formulations?
We do not claim to “invent” these MDPs; we only aim to find efficient algorithms to solve them. We acknowledge that the design of our MDPs, while simple and effective, is quite straightforward. In fact, [1] uses similar MDPs to ours; they design MDPs that “add one vertex at each step”, but for minimum vertex cover, maximum cut, and the traveling salesman problem. That said, they work on slightly different CO problems and use different reward designs from ours. Other MDP designs include [2], where each action “edits” the current solution, and [3], where each action adds multiple vertices to the solution set. We will include this discussion in the final version.
[1] Learning Combinatorial Optimization Algorithms over Graphs
[2] Learning to Perform Local Rewriting for Combinatorial Optimization
[3] Learning What to Defer for Maximum Independent Sets
> I am not confident enough to comment on the choice of algorithms in the MIS benchmark, but I have an issue with the 3 non-MIS benchmarks, which seem a bit unfair to me. A major asset of GFlowNets is their sequential construction of the solution, which the authors mention as the reason for their success. On the other hand, the benchmarked competitors ERDOS and ANNEAL rely on a hypothesis of independence between vertices of the graph, which explains their fast inference. In light of this, I argue that it would make sense to benchmark GFlowNets against other methods (typically from the RL literature) that also adopt a sequential perspective, to see if the advantage still holds.
A large part of research in combinatorial optimization is about trading off performance and efficiency. One of the advantages of Erdos-based methods, including Anneal, is efficiency; on the other hand, even given a large enough time budget (as in our experiments), they do not obtain tremendous improvement. Please note that we have compared with the PPO baseline (from a recent related work [4]) in the MIS comparison, which is close to GFlowNet in the sense of sequential generation (but fails to encourage diversity in the solutions -- please find our detailed explanation below), and we will incorporate more comparisons with RL baselines that exhibit sequential generation behavior in the final version.
[4] Learning What to Defer for Maximum Independent Sets
> To underline the impact of the paper, it would be helpful to explain why a diverse set of solution candidates is important in practice. As a combinatorial optimizer myself, it is not obvious to me: when should I settle for a set of diverse, but possibly suboptimal solutions?
As discussed in the paragraph starting at line 44, there are several cases where we want to emphasize diversity. For example, there can be multiple different optimal solutions to a combinatorial optimization problem due to the symmetry of problem configurations, as pointed out in [1]. In addition, diversity in the solutions holds great importance across several dimensions where solely looking for a single optimal solution to the current problem could fail:
- **Robustness.** Multiple solutions provide a form of robustness: if one solution fails or becomes infeasible due to changes in the problem environment or constraints, alternative solutions are readily available.
- **Exploration of the solution space.** A diverse set of solutions allows for a broader exploration of the solution space. This provides more insight into the structure of the problem and can serve as warm starts for numerical solvers.
- **Stakeholder preferences.** In many real-world problems, there are multiple stakeholders with different preferences or objectives. A diverse set of solutions provides a range of options that cater to these preferences.
- **Dynamic environments.** Where problem parameters change over time, a diverse set of solutions allows for quick adaptation and better generalization to these changes.
[1] Combinatorial optimization with graph convolutional networks and guided tree search
> L105: Where do the final rewards appear?
Thanks for the question. When a state is terminal, its flow value is the reward value, which is the source of the learning signal grounded by the environment.
> L242: I thought the whole point was to make epochs faster?
The point is to show that our proposed novel GFlowNet variant is better within the same training time. Since different variants consume similar wall clock time for one epoch, we could roughly think of epoch as a measure of training time.
> L255: Are there cases where this reward continuation is not applicable?
To the best of our knowledge, this has demonstrated applicability across the problems we have studied, particularly those where all states belong to the same space.
> L640: Why not use a linear formulation? What formulation did you use for the three non-MIS problems?
We follow what is provided in the MIS benchmark [1]. We thus try two different Gurobi formulations, which achieve average size of $40.14$ in $2:15:07$, and average size of $40.90$ in $2:10:36$, respectively. For other problems we use linear formulations.
[1] https://github.com/MaxiBoether/mis-benchmark-framework
---
Rebuttal Comment 1.1:
Comment: > We do not aim to claim that we “invent” the MDPs, but only want to find efficient algorithms to solve these MDPs.
As early as the abstract (L8) and later in the intro (L61), you state "we design MDPs for different combinatorial problems". It is the verb "design" that I find misleading in this case. Perhaps "adapt" or "leverage" would be better suited.
> We will include this discussion to the final version.
Good idea.
> Please note that we have compared with the PPO baseline (from a recent related work [4]) in MIS comparison, which is close to GFlowNet in the sense of sequential generation (but fails to encourage diversity in the solutions -- please find our detailed explanation as below), and we will incorporate more comparison of RL baselines with sequential generation behavior in the final version.
Perfect, thank you.
> In addition, diversity in the solutions holds great importance across several dimensions, while solely looking for an optimal solution for the current problem could fail [...]
Great explanation, I think it would really strengthen the paper to include it early on, reworking the paragraph starting at L44 in the process.
> The point is to show that our proposed novel GFlowNet variant is better within the same training time. Since different variants consume similar wall clock time for one epoch, we could roughly think of epoch as a measure of training time.
Perhaps I misinterpreted what you mean by "epoch" here. In the case where you subsample the transitions by a factor of $k$, an epoch involves $k$ times more passes? In that case, indeed, the plot is very convincing!
> We follow what is provided in the MIS benchmark [1]
I don't think that's true. I followed the link you gave and ended up on this paper describing the benchmark: https://arxiv.org/pdf/2201.10494.pdf. In Section 2, they state that they use a linear formulation for MIS in the main paper. This is justified in Appendix D2, where they indeed _try_ the quadratic one but conclude that the linear option performs better in nearly all cases.
---
Reply to Comment 1.1.1:
Comment: Thanks for your suggestions and we will improve the writing in revision accordingly.
> I followed the link you gave and ended up on this paper describing the benchmark: https://arxiv.org/pdf/2201.10494.pdf. In Section 2, they state that they use a linear formulation for MIS in the main paper. This is justified in Appendix D2, where they indeed try the quadratic one but conclude that the linear option performs better in nearly all cases.
Notice that we quote the GitHub link rather than their paper. In their GitHub implementation, both formulations are provided. In the early stage of this project, we found that the linear formulation sometimes failed with hardware-related errors that we could not identify; thus we ran the quadratic formulation instead. During the rebuttal period, we tried both Gurobi formulations on the large-scale MIS task, and they obtained average sizes of $40.14$ (linear formulation) and $40.90$ (quadratic formulation), respectively. This indicates that the difference between formulations is not significant in our data setting. | Summary: The authors propose an approach for using GFlowNets for solving some NP-Hard combinatorial optimization problems on graphs. The GFlowNet approach is designed to better perform credit assignment for fitting a constructive policy that adds nodes on the graph to a currently considered subset. The authors propose slight modifications of GFlowNet that help it perform well on graph optimization tasks which require long trajectories and improved credit assignment. The authors justify their design decisions empirically via ablation studies. Finally, the authors evaluate the proposed approach against supervised learning, unsupervised learning, heuristics, and traditional OR approaches where applicable. They evaluate on a variety of problems considering max independent set, max clique, minimum dominating set, and max cut. They demonstrate improved solution quality over the investigated baselines and mostly improved runtime over OR baselines. Overall, the paper is a promising initial attempt at applying GFlowNets to solve CO problems on graphs.
Strengths: The main strength of the paper is in addressing limitations of previous RL for CO approaches using GFlowNet. In previous RL-based optimization approaches, the issue of credit assignment is a major challenge as often individual choices have limited immediate impact on the overall solution quality while the long-term impact can be quite large. GFlowNet as an approach aims to perform this credit assignment well as it aims to learn a policy based on a long-term reward.
A second strength is that the authors evaluate the given approach against a variety of baselines on a number of settings and present results on all settings.
Lastly, the writing is quite clear, and the paper gives a useful introduction of how to use GFlowNet for CO problems which can likely lead to future work of employing GFlowNet for broader types of CO problems.
Weaknesses: The main weakness is that the main contribution of the paper is that it is mainly directly applying GFlowNet to CO on graphs with minor modifications of the GFlowNet approach, and justifying their approach with intuition and empirical results. It might be good to either identify 1) specific ways that the GFlowNet methodology might be adapted to be specially suited to CO problems, or 2) ways in which the particularities of GFlowNet might be especially suited for answering interesting questions in CO problems.
Some ideas for 1) it may be helpful to formulate the architecture used in GFlowNet to encode certain properties that help solvers such as being invariant to permutation of the solution generation procedure, special actions for GFlowNet such as deleting components of the current solution. Other ideas might be to modify GFlowNet to make use of alternative information such as supervision from known solutions on the training dataset, precomputed primal solutions with objective values, or integrating solver components like presolve etc.
For 2) it might be helpful to showcase the ability to generate diverse solutions by evaluating the diversity of solutions generated by the GFlowNet in terms of how many unique solutions it produces. Alternatively, one could generate a solution pool and using that to warm start solvers or generate solutions for multi-objective optimization problems.
Overall, it would be interesting to see how the GFlowNet approach could be used outside of improving solve time.
While the empirical results are promising, it is somewhat unfair to compare the given heuristic methods only against Gurobi as an exact solver, since Gurobi may be spending a lot of time proving optimality in addition to finding the optimal solution. It would be more reasonable to compare against Gurobi tuned for quickly finding primal solutions and set at a very strict time limit. Additionally, Gurobi seems to be solving faster than the proposed approach in the MDS setting.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: [Paragraph starting at line 44] One listed downside of RL in CO is that RL may not produce diverse solutions. It would be helpful to demonstrate how this approach addresses that for instance by demonstrating that the GFlowNet approach does generate diverse solutions compared to the RL approach. For instance, Gurobi is able to return a pool of solutions. It would be interesting to compare the solution diversity between approaches.
For the forward looking technique, would it be possible to use the objective value of a “completed solution” which finds the optimal solution given the partial solution as fixed? Does this need to be a differentiable function of the solution?
It would be helpful to give the variance for the results to determine how close the distribution of results are since they are somewhat close to previous work. Additionally, it would be helpful to give some metrics such as win rate, i.e. the percent of instances for which the given approach was fastest / got the best quality solution.
It would also be helpful to explain the sizes of the train / test datasets and how they were split especially in the case of SATLIB.
It is unclear how this approach might handle CO problems where feasibility is “difficult”. For instance, for TSP on non-complete graphs it may be hard to find a feasible tour. It would be helpful to explain how this method might approach settings where feasibility is nontrivial via constructive methods. Here it may be difficult to sample feasible solutions in the first place and as such it may be helpful to understand how this method would perform. Similarly, it would be helpful to explain how this method would perform in settings where the decisions are more complex than selecting a subset of vertices. This is somewhat hinted at in the paragraph starting on line 195 but it would be helpful to explain this limitation and how it might be addressed.
Small questions:
[line 29] Why is Gurobi considered as giving approximate solutions? It should not only give optimal solutions given enough runtime but also give a bound on the solution quality in order to prove optimality.
[line 258] handcraft -> handcrafted
Table 1, Time of Ours in Large has a leading 0 before the 4:22
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Relevant limitations are addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > directly apply GFlowNet to CO with minor modifications
See the general response.
> formulate architecture to encode certain properties… modify GFlowNet to make use of alternative information
We appreciate the reviewer's insightful suggestions about interesting future directions. Indeed, we use a standard architecture that does not exploit geometric properties and, as in previous GFlowNet works, standard scenarios that do not allow deletion actions. We acknowledge that these are valuable future directions which could further boost GFlowNet performance; this indicates that there is still huge potential for GFlowNets in CO problems. Furthermore, our approach is purely unsupervised (without knowledge of precomputed solutions). On the other hand, this suggests an interesting future direction: e.g., we could start from a given solution and then use the GFlowNet's backward policy to generate trajectories that we know lead to informative high-reward regions. These high-quality trajectories could be very useful in training, as they save a lot of exploration effort.
> diversity of GFlowNet and other methods
We compute the diversity of RL and GFlowNet solutions. For each graph in the test set, we let each algorithm generate 30 solutions and compute their mean pairwise distance; the reported diversity is the mean of this value across the test set. The distance between two solution vertex sets is their Jaccard distance. On the MIS small task, RL and GFlowNet obtain diversities of $0.428$ and $0.618$, respectively. As for Gurobi, it obtains a diversity of $0.312$ when its solution-pool feature is used to return multiple solutions. This indicates that GFlowNet produces more diverse samples than PPO; notice that GFlowNet also achieves better performance than PPO on this benchmark in the sense of a larger solution set.
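The diversity metric described above can be sketched concretely. This is a minimal illustration with made-up solution sets, not the paper's evaluation code:

```python
from itertools import combinations

def jaccard_distance(a, b):
    """1 - |A ∩ B| / |A ∪ B| between two solution vertex sets."""
    a, b = set(a), set(b)
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def mean_pairwise_diversity(solutions):
    """Mean Jaccard distance over all unordered pairs of sampled solutions."""
    pairs = list(combinations(solutions, 2))
    return sum(jaccard_distance(a, b) for a, b in pairs) / len(pairs)

# Three hypothetical independent sets sampled for one graph.
samples = [{0, 2, 4}, {0, 2, 5}, {1, 3, 5}]
```

A score near 0 means the sampler keeps returning the same vertex set; a score near 1 means the sampled sets are nearly disjoint.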
> unfair to compare given methods only against Gurobi… Gurobi faster than GFlowNet on MDS on large task
Indeed, the configuration of using Gurobi to solve combinatorial optimization problems is flexible, and different formulations might yield different performance. However, we do include KaMIS, a highly specialized solver for MIS problems, in our experiments; we believe KaMIS achieves at least similar performance to the best-tuned Gurobi, if not better. What’s more, our Gurobi results are from [1], which has been adopted by many other CO-related machine learning papers. Furthermore, our method does not use specialized heuristics in its algorithmic design, so it would be unfair to compare it against Gurobi equipped with such heuristics. Lastly, we tried two different Gurobi formulations on the large-scale MIS task; they obtained an average size of $40.14$ in $2:15:07$ and $40.90$ in $2:10:36$, respectively.
While it's true that our method didn't outperform Gurobi on the large-scale MDS dataset, it's important to consider the overall performance. As the no-free-lunch principle suggests, different methods have different strengths and weaknesses, and their performance varies depending on the characteristics of the dataset. The dataset in question may have certain characteristics that particularly favor the baseline method. Our method demonstrated superior or competitive performance on almost all datasets, which we believe is a strong indication of its effectiveness and robustness.
[1] https://github.com/MaxiBoether/mis-benchmark-framework
> For forward looking, would it be possible to use... optimal solution? Does this need to be differentiable?
It would be a great and feasible idea to use known optimal solutions. For example, in MIS, if we know the optimal solution has size $a$ and the current set has $b$ nodes, then we can use $b/a\in [0, 1]$ as the negative partial energy $\mathcal{\tilde E}$. We accordingly use $c/a\in [0, 1]$ as the terminating reward, where $c$ is the solution-set size of the terminating state. In this example, we do not require differentiability.
> variance for the results… other metrics
We repeated 5 runs of GFlowNet, which achieved $19.20\pm 0.08$ on the small-scale MIS task and $37.89\pm 0.36$ on the large-scale task. This indicates that GFlowNet's performance is stable and that its improvement over the other baselines is statistically significant. Due to limited time, we only calculated the win rate of GFlowNet against Gurobi, which is $0.326$. Note that this is a nontrivial result, as Gurobi is almost perfect on this benchmark; it indicates the advantage of GFlowNets.
> train/test split
As described in Appendix D, for tasks with RB/BA graphs we use a training set of size 4000, a validation set of size 500, and a test set of size 500; for SATLIB we use a training set of size 39000, a validation set of size 500, and a test set of size 500. For RB and BA graphs we split uniformly, since the data are simulated in the same way for train/valid/test; for SATLIB, we obtained the split from the authors of [1].
[1] DIFUSCO: Graph-based Diffusion Solvers for Combinatorial Optimization
> when feasibility is “difficult”
Indeed, we ensure all obtained solutions are feasible through the specific MDP design. For more general CO problems it can be hard even to obtain a feasible solution, let alone one that maximizes the target objective. That said, this poses a significant challenge not only to our method but to ML approaches in general. One potential remedy is to assign a large negative reward to infeasible results, encouraging the model to avoid infeasible solutions. Alternatively, we could consider a soft-penalty approach, where the reward is a monotonic function of the number of constraints satisfied. This would allow the model to improve feasibility over time as a temperature gradually hardens the soft constraint.
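A sketch of the soft-penalty idea (hypothetical helpers, assuming the number of violated constraints is available; `beta` acts as an inverse temperature that hardens the constraint as it grows):

```python
import math


def soft_penalty_reward(base_reward: float, num_violated: int,
                        beta: float) -> float:
    """Reward that decreases monotonically with the number of violated
    constraints; a larger beta makes the soft constraint harder."""
    return base_reward * math.exp(-beta * num_violated)


def beta_schedule(step: int, beta0: float = 0.5,
                  growth: float = 1.01) -> float:
    """Simple annealing schedule: harden the constraint over training."""
    return beta0 * growth ** step
```

With this shaping, fully feasible solutions keep the base reward, while each additional violated constraint shrinks it multiplicatively.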
> Why is Gurobi considered as approximate?
You are right. We originally meant "give approximate solutions within limited time" and will correct this in the revision.
---
Rebuttal Comment 1.1:
Title: followup
Comment: I thank the authors for their response and clarifications which have addressed my concerns. I believe this is a good initial work towards solving graph CO problems with gflownet that opens several new followup directions for future work in using gflownet for CO.
---
Reply to Comment 1.1.1:
Comment: We would like to sincerely thank the reviewer for recognizing the value of our work and contributing to raising the submission's score. Our work has benefited greatly from the rebuttal process. Your expertise, effort, and attention to detail truly made a difference, and we are sincerely thankful for your dedication to the peer-review process. | Rebuttal 1:
Rebuttal: ## General Response
We sincerely thank all the reviewers for your insightful comments and suggestions.
> About the contribution and novelty of our work
While our approach builds upon the general GFlowNet framework, we have made significant design contributions and innovations to ensure its scalability and effectiveness on complex CO problems, which cannot be achieved by direct application. First, our MDP formulations are specially designed for different CO problems and have not been studied in previous GFlowNet work. These designs are crucial for ensuring that solutions satisfy the feasibility requirement (this relates to another question raised by the same reviewer about CO problems with "difficult" feasibility). Second, the GFlowNet training scale in this work is much larger, in terms of GPU occupation, than in any previous work, and we would like to emphasize that our work pushes the boundaries of GFlowNet's training scale. To cope with this challenge, we develop novel training techniques that enable efficient learning (Sec. 3.3) and remedy the training-efficiency and memory-consumption issues of previous approaches [1]. These techniques are applicable not only to CO tasks but to any GFlowNet application, such as molecule design. On the other hand, there is a large body of work [2] on using RL to solve CO problems that is not considered "direct application", let alone for GFlowNets, which we have argued are more appropriate than RL for these tasks.
[1] Trajectory balance: Improved credit assignment in GFlowNets
[2] Reinforcement Learning for Combinatorial Optimization: A Survey | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Understanding and Addressing the Pitfalls of Bisimulation-based Representations in Offline Reinforcement Learning | Accept (poster) | Summary: Bisimulation methods tend to fail when applied to offline RL. The authors performed a bisimulation error analysis and concluded that missing transitions, inevitable when training with a fixed offline dataset, lead to inaccurate metric predictions. Therefore, training policies based on the representation learned with inaccurate predictions will perform poorly. The paper proposes a $\tau$-expectile-based bisimulation operator to overcome this challenge. This operator achieves fully in-sample learning in the limit $\tau\to 1$. The authors also note that reward scaling is crucial when the underlying state-space metric is bounded. Experiments conducted on state-based and pixel-based offline RL problems show that both expectile-based bisimulation operator and reward scaling have a significant impact on the final quality of the learned representation.
Strengths: The authors mathematically analyzed the properties of the original bisimulation operator and the novel expectile-based bisimulation operator. The mathematical techniques used in the proofs and the careful analysis of the error bounds will be very helpful to other researchers trying to work on applying bisimulation-metric-based representation learning to offline RL. The expectile-based based bisimulation operator the authors proposed is also interesting and effective under various settings, as the experimental results reveal. Finally, the paper is overall well-written and easy to understand.
Weaknesses: It is doubtful that learning a bisimulation operator in the offline setting generally fails for the reasons the paper presents. To facilitate learning by diversifying trajectories, a lot of RL environments impose a limit on the episode length. An environment with a time limit of $T$ should have states that an agent can reach within $T+1$ timesteps but not within $T$ timesteps. Those states satisfy the conditions of Proposition 4, which would mean that learning a bisimulation operator fails. This is not the case.
Also, according to line 238, unlike the original bisimulation operator, learning the expectile-based bisimulation operator succeeds because we only consider state pairs with corresponding actions in the dataset. Suppose this is the reason why learning an expectile-based bisimulation operator succeeds. Then why not just omit the last and second-to-last states of each trajectory in the dataset during training while using the original bisimulation operator? Then the next-state pairs of every remaining state pair would have corresponding actions in the dataset. It is hard to imagine that such a simple solution would work.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Is the proposed algorithm also effective in the online RL setting? Also, it would be helpful for the readers if the paper introduces the definition of $G^\pi$ and a *lifted MDP* before mentioning them in equation (2) and line 139, respectively.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations of their work, and I do not believe this work will have a potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and feedback on our paper. We have included new experimental results and some general discussion, please refer to our generic response and uploaded file to address these points.
- **Question: It is doubtful that learning a bisimulation operator in the offline setting generally fails because of the reasons the paper presents. To facilitate learning by diversifying trajectories, a lot of RL environments impose a limit on the episode length. An environment with a time limit of T should have states that an agent can reach within T+1 timesteps but not in T timesteps. Those states satisfy the conditions of Proposition 4, which means learning a bisimulation operator will fail. This is not the case.**
- **Response**: While it is true that many RL environments truncate the episode length (for example, some tasks in MuJoCo may have a maximum length of 500), this chiefly reflects pragmatic considerations around sample efficiency. First, in the Preliminaries we emphasized that our framework is predicated on an infinite horizon, thus necessitating the discount factor $\gamma$ (in contrast, the finite-horizon scenario does not require $\gamma$, a distinction that sets it apart from our problem formulation). Second, the predicament articulated in Proposition 4 is unaffected by the length of trajectories or the truncation of episodes. At its core, whenever the bisimulation Bellman residual is employed as a surrogate objective in an offline setting, this dilemma surfaces: as long as the dataset remains incomplete, the relationship between $\epsilon_{\phi}^{\pi}$ and $\Delta^{\pi}_{\phi}$ does not strictly hold.
- **Question: According to line 238, unlike the original bisimulation operator, learning the expectile-based bisimulation operator will succeed because we only consider state pairs with corresponding actions in the dataset. Suppose this is the reason why learning an expectile-based bisimulation operator succeeds. Then why not just omit the last and the second-last states of each trajectory in the dataset during training while using the original bisimulation operator? Then the next-state pairs of every state-pairs left will have corresponding actions in the dataset. It is hard to imagine such a simple solution will work.**
- **Response**: We may not have entirely understood the reviewer's question. Even if we exclude the last and second-to-last states, the third-to-last states remain. For these particular states, we must still compute $\hat{\epsilon}$, necessitating the sampling of subsequent states. Since we have already dispensed with the second-to-last states, this calculation falls outside the confines of the dataset.
- **Question: Is the proposed algorithm also effective in the online RL setting? Also, it would be helpful for the readers if the paper introduces the definition of G^\pi and a lifted MDP before mentioning them in equation (2) and line 139, respectively.**
- **Response**: Theoretically, as long as the dataset is incomplete, this issue will arise. Specifically, in on-policy settings, the expected bisimulation residual proves adequate for learning representations (as established in Theorem 3); conversely, in off-policy settings, it serves as a deficient estimator, as the missing transitions disrupt the relationship between $\epsilon_{\phi}^{\pi}$ and $\Delta^{\pi}_{\phi}$ (as outlined in Proposition 4), though it is marginally better than in offline settings.
Thanks for the recommendation, and we will contemplate reorganizing the structure of the relevant paragraph to enhance its readability and coherence.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal.
**Q1**: I am still not convinced that the incompleteness of the dataset is the primary reason why learning the bisimulation operator fails in the offline RL setting. Bisimulation-based online RL algorithms DBC[1], SimSR[2], and MICO[3] perform well on the DeepMind Control Suite tasks that have a horizon of 1000 timesteps, which is the same as the horizon of D4RL MuJoCo tasks, while assuming infinite horizon by incorporating a discount factor. However, as I previously mentioned, the dataset can never be complete when episodes are truncated at a certain timestep. Then by Proposition 4, DBC, SimSR, and MICO should fail to learn the bisimulation metric and should perform poorly.
**Q2**: I misunderstood the meaning of Equation (8) when I was writing the review. Sorry for the confusion caused.
### References
[1]: Reference [48] of the paper
[2]: Reference [46] of the paper
[3]: Reference [5] of the paper
---
Reply to Comment 1.1.1:
Title: Response to Reviewer BVFy
Comment: Thanks for the reply.
Once again, we would like to emphasize that **in offline settings, the dataset is indeed fixed** and beyond our control regarding how the agent collects it. Therefore, the dataset may suffer from incompleteness, biases, and lack of diversity [A, B]. This "incompleteness" is even more severe compared to online learning settings, where online reinforcement learning involves exploration to interact with the environment and collect data, potentially including new states that have not been seen before.
With all due respect, it is important to note that **the state incompleteness of the dataset is unrelated to how the episodes are truncated**. In the DMC tasks, almost every state has the opportunity to be reached within the given truncated horizon of 1000 timesteps thanks to exploration, indicating that the agent is, in some sense, able to perceive the "complete" state space in online settings.
In extreme scenarios, there might be cases where only one state (let's call it Y) can reach a final state (let's call it X), while the probability of any other state reaching X is 0, regardless of the action taken. Another extreme case could occur in goal-conditioned settings, where the goal is 1001 steps away from the initial point, and truncating the horizon at 1000 steps would make the goal unreachable. However, it is important to note that these cases are unlikely to occur in practice. Truncation is typically done to prevent trajectories from becoming excessively long and containing less useful information for training, rather than intentionally preventing the agent from reaching its goal. This is supported by the fact that none of the environments in the DMC tasks [C] evaluated in DBC, SimSR, or MICo align with these two cases.
We hope that this explanation can help the reviewer understand the motivation for proposing our approach. If there are any other confusion or questions that the reviewer may have, please do not hesitate to let us know. We are happy to provide any additional clarification or information that may be needed to ensure a thorough understanding of our work.
[A]: Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643, 2020.
[B]: Schweighofer K, Hofmarcher M, Dinu M C, et al. Understanding the effects of dataset characteristics on offline reinforcement learning[C]//Deep RL Workshop NeurIPS 2021. 2021.
[C]: Tassa Y, Doron Y, Muldal A, et al. Deepmind control suite[J]. arXiv preprint arXiv:1801.00690, 2018. | Summary: The paper investigates why bisimulation-based methods, which are known to work well in the online setting, fail in the offline case. It hypothesizes that missing transitions and reward scaling issues are a source of this failure, and proposes a way to mitigate these issues. It includes experiments on various benchmarking tasks.
Strengths: 1. According to me, this is an important problem to solve. Recently bisimulation-based representation learning methods have become popular but have focused mostly on the online setting. The insights from this paper are illuminating, especially Theorem 3 and Proposition 4.
2. I like that they considered the MICO distance since unlike other metrics, MICO is a diffuse metric and has additional complications due to the non-zero self-distances.
Weaknesses: It appears that the approach is limited to a single and known behavior policy. I wonder how this could be used in the multiple and unknown behavior policy setting, which is starting to become a more popular setting in offline RL.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. I’d point authors to this new paper by the MICO authors: https://openreview.net/forum?id=nHfPXl1ly7 to see how their approach fits in with this kernel-based bisimulation approach.
2. How do reward normalization schemes impact the results? For example, subtracting min and dividing by max-min (where min/max are the min/max rewards in the dataset), or 0/1 mean/std normalization etc? Are these beneficial when computing the bisimulation metric?
3. Why is the Effective Dimension a good metric in the offline RL case (Figure 1)? The paper just plots the results without justifying the usage. I ask this by considering other offline RL metrics for representations such as distribution shift and coverage [1, 2] in terms of the state-action feature representations.
4. From the experiments, it appears that MICO and SimSR were applied as-is. I wonder if it would be useful to apply an off-policy version of say MICO by either sampling from the policy getting trained or using some version of importance sampling?
5. It would be interesting to include some preliminary results in the appendix on off-policy evaluation (not just control) (keep the main results in tact; I am referring more to an exploratory analysis for the appendix).
6. How is maximization and subject-to constraints (eqn 8) satisfied in continuous action setting?
[1] Instabilities of Offline RL with Pre-Trained Neural Representation. Wang et al.
[2] Learning Bellman Complete Representations for Offline Policy Evaluation. Chang et al.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: See above.
N/A societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and feedback on our paper. We have included new experimental results and some general discussion, please refer to our generic response and uploaded file to address these points.
- **Question: It appears that the approach is limited to a single and known behavior policy. I wonder how this could be used in the multiple and unknown behavior policy setting, which is starting to become a more popular setting in offline RL.**
- **Response**: First, we want to clarify that we do not have access to the behavior policy that was used to collect the data. Second, a recent paper [A] proves, under mild conditions, that the dataset's transition distribution (i.e., the occupancy measure) obtained from multiple behavior policies can be replicated by a single Markovian behavior policy. This theoretical result rigorously allows any offline RL algorithm to pretend that the dataset was generated by the equivalent single Markovian behavior policy; thus the multiplicity (and non-stationarity) of the behavior policy does not present any additional challenge to the offline RL task.
[A] Laroche, R. and Tachet Des Combes, R. On the Occupancy Measure of Non-Markovian Policies in Continuous MDPs. ICML, 2023.
- **Question: I’d point authors to this new paper by the MICO authors to see how their approach fits in with this kernel-based bisimulation approach.**
- **Response**: We thank the reviewer for bringing this work to our attention. In the general response, we discuss the suitability of the two techniques we have proposed. The kernel-based bisimulation approach, dubbed KSMe, adheres to the bisimulation principle and could inherently benefit from the EBS technique in offline scenarios. Given that $\mathrm{supp}(\mathcal{R})\subseteq[-1,1]$ and $k_1(x, y)=1-\frac{1}{2}\left|r_x^\pi-r_y^\pi\right|$, the immediate similarity is constrained within the interval $[0,1]$. Importantly, the kernel is not subject to a specific bound, so the RS technique need not be considered (aligning with the relationship between KSMe and MICo elucidated in that paper; MICo bypasses this issue as well). Furthermore, KSMe preprocesses the scale of the rewards before applying the bisimulation operator, mirroring an insight our paper provides: the significance of a well-scaled reward.
- **Question: How do reward normalization schemes impact the results? For example, subtracting min and dividing by max-min (where min/max are the min/max rewards in the dataset), or 0/1 mean/std normalization etc? Are these beneficial when computing the bisimulation metric?**
- **Response**: This is a good question. To delve deeper into the relevance of the reward scaling technique, we have incorporated an ablation study. In a nutshell, the 0/1 mean/std normalization (referred to as "standardization" in the attached document) does not deliver favorable empirical outcomes. We have furnished the corresponding findings in Figure 1 of the uploaded pdf file and a detailed account in the general response.
- **Question: Why is the Effective Dimension a good metric in the offline RL case (Figure 1)? The paper just plots the results without justifying the usage. I ask this by considering other offline RL metrics for representations such as distribution shift and coverage [1, 2] in terms of the state-action feature representations.**
- **Response**: Thanks for bringing these measurements to our attention. Indeed, the foundational concepts guiding these methodologies bear striking resemblance, as all emanate from feature reachability or the rank covariance of the feature. They are unified in their aspiration to gauge the efficacy of representations, and our choice of one over the other for evaluation was largely discretionary.
- **Question: From the experiments, it appears that MICO and SimSR were applied as-is. I wonder if it would be useful to apply an off-policy version of say MICO by either sampling from the policy getting trained or using some version of importance sampling? It would be interesting to include some preliminary results in the appendix on off-policy evaluation**
- **Response**: Thank you for this insightful suggestion! The question of how importance sampling techniques (such as per-decision importance sampling) fare with MICo and SimSR indeed makes for an engaging question. Theoretically, using importance sampling as an illustration, it can be implemented for off-policy value estimation. However, the instability of importance sampling for long horizons is a recognized challenge. Given that bisimulation measurements bear relevance to the value function in some regard, we anticipate that this approach will grapple with analogous challenges within the bisimulation principle. It's worth noting, though, that despite conventional importance sampling's potential limitations, recent works in off-policy evaluation have emerged to mitigate these concerns. This constitutes a promising avenue of research, and we may well explore the integration and adaptation of these advancements within the context of bisimulation in offline scenarios. We intend to contemplate including a discussion and analysis of related works in the appendix. Once more, we extend our sincere gratitude to the reviewer for these invaluable recommendations.
- **Question: How is maximization and subject-to-constraints (eqn 8) satisfied in continuous action setting?**
- **Response**: The maximization operation in Equation 8 only occurs when $\tau=1$ and is solely a theoretical construct. In practice, we employ Equation 6, so the maximization operation does not appear in the concrete implementation.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for answering my questions. I will keep the score as-is. This seems like a useful paper, especially given the advent of many bisimulation-based representation learning methods and recent work that has shown they tend to fail in the offline setting [1].
[1]: Mengjiao Yang, Ofir Nachum. Representation Matters: Offline Pretraining for Sequential Decision Making. ICML 2021
---
Reply to Comment 1.1.1:
Title: Response to Reviewer ndNc
Comment: Thank you for the positive feedback. We appreciate your efforts in reviewing our work. We will incorporate your suggestions in the final version to make it a high-quality paper. | Summary: The paper discusses the pitfalls of bisimulation-based methods in offline RL. It identifies missing transitions in the finite dataset as a significant problem for bisimulation-based methods, leading to ineffective estimation. The paper also highlights the importance of reward scaling in the bounded cosine-distance method.
To address these issues, the paper proposes using the expectile operator for metric learning in offline RL, as the expectile operator helps prevent overfitting to incomplete data. Additionally, the paper analyzes the upper bound of the metric w.r.t. the reward scale and proposes a reward-scaling strategy to reduce representation collapse.
Strengths: This paper establishes a connection between the Bellman and bisimulation operators to understand the gap for bisimulation-based approaches in online and offline RL settings. It defines the bisimulation Bellman residual and the bisimulation error and analyzes the gap between minimizing them.
This paper also analyzes the bounds of the bisimulation-based metric and how reward scaling affects the cosine distance. It demonstrates that setting the reward coefficient to $1-\gamma$ improves performance compared to using SimSR.
Weaknesses: The motivation for utilizing the expectile operator could be explained more clearly. While expectile regression is applicable to both online [1] and offline RL, its relevance to addressing Proposition 4 and preventing overfitting in the offline RL setting requires further elaboration.
The authors propose reward scaling as a solution to mitigate the limitation imposed by the cosine distance, which has a range of [0, 2]. This range imposes an upper bound on the value of $c_r$. However, Theorem 8 suggests that a larger value of $c_r$ leads to a more accurate approximation of the value function. An alternative approach to overcome the limitations of the cosine distance would be to use other distance metrics such as L1 or L2 distance. Hence, it would be beneficial for the authors to clarify the rationale behind selecting cosine distance over these alternatives.
In the comparison experiments between SimSR+RS(+EBS) and MICo(+EBS), it is possible that RS improves performance due to the numerical stability of neural networks. By upper bounding the range of $G$ to [0, 1], which can be stably predicted by neural networks, RS might contribute to the observed performance enhancement.
The bisimulation Bellman residual $\epsilon_{\phi}^{\pi}$ may not be minimized to zero if $G_{\phi}^{\pi}$ is a cosine distance. Consider the case $s_i = s_j$. Then $G_{\phi}^{\pi}(s_i, s_j) = 0$, since cosine distance has zero self-distance. However, $FG_{\phi}^{\pi}(s_i, s_j)$ is a Łukaszyk–Karmowski distance, because $\gamma E_{s^{\prime}_i, s^{\prime}_j} G_\phi^\pi(s^{\prime}_i, s^{\prime}_j)$ is a Łukaszyk–Karmowski distance as $s^{\prime}_i$ and $s^{\prime}_j$ are sampled [2], and $|r_i^{\pi} - r_j^{\pi}|$ is intractable and usually replaced by $E_{r_i, r_j}|r_i - r_j|$, which is again a Łukaszyk–Karmowski distance [3]. Therefore $FG_{\phi}^{\pi}(s_i, s_j) \geq G_{\phi}^{\pi}(s_i, s_j)$ when $s_i = s_j$, with equality only if $G_{\phi}^{\pi}(s^{\prime}_i, s^{\prime}_j) = 0$ for all $s^{\prime}_i$ and $s^{\prime}_j$ and $r_i = r_j$ for all $r_i$ and $r_j$. This can undermine Proposition 4, because not all $\epsilon_{\phi}^{\pi}$ are zero. The MICo distance, however, does not suffer from this issue.
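The non-zero self-distance at the heart of this argument is easy to check numerically; the following is an illustrative Monte-Carlo sketch (hypothetical, not tied to the paper's implementation) showing that $E|r_i - r_j| > 0$ even for i.i.d. draws from the same reward distribution:

```python
import random


def lk_self_distance(sample, n: int = 100_000, seed: int = 0) -> float:
    """Monte-Carlo estimate of E|r_i - r_j| with r_i, r_j drawn i.i.d.
    from the same distribution -- the 'self-distance' a
    Lukaszyk-Karmowski-style distance assigns to identical states."""
    rng = random.Random(seed)
    return sum(abs(sample(rng) - sample(rng)) for _ in range(n)) / n


# For r ~ Uniform(0, 1), E|r_i - r_j| = 1/3 > 0 even though the
# two marginal distributions are identical.
est = lk_self_distance(lambda rng: rng.random())
```

Only a deterministic reward (e.g., a constant) yields zero self-distance under this estimate.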
------
Reference
[1] Rowland et al. Statistics and Samples in Distributional Reinforcement Learning. ICML 2019.
[2] Castro et al. MICo: Improved representations via sampling-based state similarity for Markov decision processes. NeurIPS 2021.
[3] Chen et al. Learning Representations via a Robust Behavioral Metric for Deep Reinforcement Learning. NeurIPS 2022.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Lemma 5 and 6 require a citation to [4].
1. How does the value of $c_r$ affect the value bound according to Theorem 8 and Theorem 15?
1. How is Eq. (29) derived from Eq. (28) in Theorem 15 in the Supplementary Material?
1. It is confusing that the main text defines $\Delta$ and $\epsilon$ as absolute values, while the supplementary material defines them as differences. The authors should use different notations (rather than the same $\Delta$ and $\epsilon$) in the supplementary material.
1. What is the value of $\tau$ in Table 1?
1. Some implementation details are missing, including the learning rate and network architecture.
-----
Reference
[4] Ma et al. Offline Reinforcement Learning with Value-based Episodic Memory. ICLR 2022.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors should clarify the motivation of expectile operator and refine the theoretical analysis with cosine distance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and feedback on our paper. We have included new experimental results and some general discussion, please refer to our generic response and uploaded file to address these points.
- **Question: While expectile regression is applicable to both online [1] and offline RL, its relevance to addressing Proposition 4 and preventing overfitting in the offline RL setting requires further elaboration.**
- **Response**: An intuitive explanation is that in on-policy settings, i.e., when the bisimulation Bellman residual and the bisimulation error are both coupled with the policy $\pi$, Theorem 3 can be derived. That is, if we could access the optimal policy $\pi^*$ and apply bisimulation to it, the problem mentioned in Proposition 4 would naturally not exist. However, in practice it is impossible to obtain the bisimulation corresponding to the optimal policy when the dataset is fixed. Therefore, we use the expectile to strike a balance between the behavior policy and the optimal one, alleviating this problem.
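As a concrete reference point, the asymmetric expectile loss commonly used to implement such operators can be sketched as follows (a standard formulation shown for illustration, not the paper's exact code); as $\tau \to 1$ it penalizes positive residuals ever more heavily, which is the mechanism behind approaching in-sample maximization:

```python
def expectile_loss(residual: float, tau: float) -> float:
    """Asymmetric squared loss L_tau(u) = |tau - 1{u < 0}| * u^2.

    tau = 0.5 recovers the symmetric squared loss; as tau -> 1, positive
    residuals dominate the objective, so the minimizer moves toward an
    upper expectile of the regression targets.
    """
    weight = tau if residual >= 0.0 else 1.0 - tau
    return weight * residual ** 2
```

For example, at $\tau = 0.9$ a positive residual costs nine times as much as a negative residual of the same magnitude.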
- **Question: It would be beneficial for the authors to clarify the rationale behind selecting cosine distance over these alternatives.**
- **Response**: We wish to gently draw the reviewer's attention to the fact that the selection of cosine distance was made in SimSR [1], not by us. The rationale behind their choice likely stems from cosine distance having zero self-distance and being suitable for constructing representations on a unit hypersphere. Notably, our proposed methodology does not require any specific distance. In our experiments, we also applied EBS to MICo and showed its superiority. For a more comprehensive explanation of the suitability of the techniques we have presented, please refer to the general response.
- **Question: In the comparison experiments between SimSR+RS(+EBS) and MICo(+EBS), it is possible that RS improves performance due to the numerical stability of neural networks.**
- **Response**: To verify the effectiveness of RS, we additionally construct an ablation study consisting of several different combinations of reward scaling strategies. Please refer to Figure 1 in our uploaded PDF file and our general response for detailed explanations.
- **Question: The bisimulation Bellman residual may not be minimized to zero if $G^\pi_\phi$ is a cosine distance; this can invalidate Proposition 4 because not all $\epsilon^\pi_\phi$ are zero. However, the MICo distance does not meet this issue.**
- **Response**: We understand the reviewer may be suggesting that the MICo distance is immune to the issue described in Proposition 4. Note, however, that Proposition 4 does not claim that the bisimulation Bellman residual will always be minimized to zero. Rather, it describes a scenario in which, even with the bisimulation Bellman residual at its minimum of zero, there may still be pairs of states exhibiting a non-zero bisimulation error. This scenario is entirely independent of the distance on which the bisimulation objectives are based.
- **Question: Lemma 5 and 6 require a citation to [4].**
- **Response**: Thanks for the suggestion! We will add the citations in the revision.
- **Question: How does the value of c_r affect the value bound according to Theorem 8 and Theorem 15?**
- **Response**: Regarding Theorem 8, since $c_r$ appears in the denominator of the right-hand side of Equation 11, increasing $c_r$ decreases the value bound. In Theorem 15, by contrast, $c_r$ only affects the scale of the reward and does not enter the derivation of Equation 27, so we can still formulate an equation akin to Equation 30; the only change is the inclusion of the coefficient $c_r$ inside the expectation.
- **Question: How is Eq. (29) derived from Eq. (28) in Theorem 15 in the Supplementary Material?**
- **Response**: If we interpret $\Delta_{\phi}^{\tilde{\pi}}(\tilde{x})$ as a value function and $\epsilon_{\phi}^{\tilde{\pi}}(\tilde{x})$ as a reward, then Equation 28 can be re-written as $V(s)=\sum_{t=0}^{\infty}\gamma^{t}\,\mathbb{E}_{s'\sim P^{\pi}}[r(s')]=\frac{1}{1-\gamma}\mathbb{E}_{s'\sim P^{\pi}}[r(s')]$. Equation 29 then follows from this formulation.
- **Question: It is confusing that the main text defines $\Delta$ and $\epsilon$ as absolute values, while the supplementary material defines them as differences. The authors should use different notations (rather than the same $\Delta$ and $\epsilon$) in the supplementary material.**
- **Response**: Our motivation was to keep the notation in the main manuscript succinct and self-contained. During the proofs in the appendix, however, some steps require considering the forms without absolute values. We will amend the symbols in the appendix to improve clarity when revising the manuscript.
- **Question: What is the value of tau in Table 1?**
- **Response**: The value of $\tau$ is 0.6 for SimSR and 0.7 for MICo across all V-D4RL datasets; we will add this in the revision.
- **Question: Some implementation details are missing, including the learning rate and network architecture.**
- **Response**: For D4RL tasks, we employ an Adam optimizer with a learning rate of 3e-4 and an identical encoder, i.e., a 4-layer MLP with ReLU activations followed by another linear layer. For V-D4RL tasks, we employ an Adam optimizer with a learning rate of 1e-4 and an identical encoder, i.e., four convolutional layers with ReLU activations followed by another linear layer. We will include the corresponding descriptions in the revision.
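The proprioceptive encoder described above (a 4-layer ReLU MLP followed by a linear layer) can be sketched framework-agnostically; the hidden width of 256 and the initialization scheme are our assumptions, not details from the paper:

```python
import numpy as np

def make_mlp(sizes, rng):
    # He-style init for a ReLU MLP; sizes = [in, h, h, h, h, out].
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    # Four hidden layers with ReLU, followed by a final linear layer.
    for W, b in params[:-1]:
        x = np.maximum(x @ W + b, 0.0)
    W, b = params[-1]
    return x @ W + b
```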
---
Rebuttal Comment 1.1:
Title: Reply to Authors
Comment: Thanks for your response. Most of my concerns have been addressed. I have raised the score to 6.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer KC23
Comment: Thank you for raising your score. We appreciate your efforts in reviewing our work. We will reflect your suggestions in the final version to enable it to be a high-quality paper. | Summary: This paper focuses on understanding and ameliorating the relatively poor performance of bisimulation-based representations in offline RL. The authors identify two main issues: overfitting to incomplete data and reward scaling. When the offline dataset is missing states, the representation that’s learned may not actually reflect the true bisimulation metric of the underlying MDP even if the associated Bellman residual over the dataset is zero. Their approach is to regularize learning with expectile regression to avoid overfitting to the incomplete data. With regards to reward scaling, the authors show that a larger reward scale can produce a more accurate approximation of the value function. Experiments are carried out on continuous control tasks with both proprioceptive and visual inputs.
Strengths: - The theoretical analysis seems carefully done and gives useful, practical insights, and the connection of the bisimulation update operator to policy evaluation is interesting.
- The careful evaluation of the proposed approach in the D4RL experiments is fairly convincing that the modifications proposed by the authors produce some improvement in performance.
Weaknesses: - In the D4RL results, SimSR + RS + EBS only clearly outperforms the other approaches on 3, possibly 4, of the 12 settings.
- It would be helpful to have either more qualitative analysis of the existing experiments or a toy experiment to provide intuition as to the resulting behavioral differences and success/failure cases induced by the authors’ approach rather than just performance plots.
- Running the V-D4RL benchmark over only 3 seeds seems rather low, and no measures of variance are provided. This stands in contrast to the D4RL results, for which not only full learning curves with error shading are shown, but also IQM measurements.
- More discussion (possibly in the appendix) regarding the variation in performance across different datasets would be helpful.
- Clarity could be improved in some areas, particularly in providing more background to the introduction of the expectile-based operator how it aids in in-sample learning / prevents overfitting to incomplete data. More precise language would also be helpful in the reward scaling section.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Can the authors provide learning curves + IQM analysis for the V-D4RL experiment?
- Could the authors run experiments with MICo + RS (and MICo + RS + EBS) as well? If RS is not applicable to MICo for some reason, could the authors add a note either in the appendix or main text explaining why?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors do an adequate job acknowledging the limitations of their approach, but perhaps more discussion would be helpful in the appendix. I don’t see any direct negative societal implications of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and feedback on our paper. We have included new experimental results and some general discussion; please refer to our generic response and the uploaded file, which address these points.
- **Question: In the D4RL results, SimSR + RS + EBS only clearly outperforms the other approaches on 3, possibly 4, of the 12 settings.**
- **Response**: Our proposed methods RS and EBS are essential for SimSR to achieve success on all datasets. By incorporating RS+EBS, the agent pretrained with SimSR demonstrates superior performance. Additionally, EBS significantly improves MICo's performance on 7 out of 12 datasets, providing strong evidence supporting our analysis. Considering that TD3+BC is already recognized for its rapid convergence and superior performance, it is expected that enhancements may not be substantial on simple datasets. In RL, sample efficiency and convergence rate are crucial. Faster convergence holds more practical significance and is preferred by the community, even if two algorithms have similar performance. This is particularly important due to the extensive training iterations required in RL. The advantage of our methodology becomes even more evident in datasets where TD3+BC struggles to converge. We observe that our method converges more rapidly on 8 out of 12 datasets, as evident from the performance curves.
- **Question: It would be helpful to have either more qualitative analysis of the existing experiments or a toy experiment to provide intuition rather than performance plots.**
- **Response**: We have provided examples for each case in Lines 180-188 and Figure 1, respectively. To illustrate the failure case of missing transitions, we consider a scenario where only (s_i, a_i, r, s′_i) and (s_j, a_j, r, s′_j) are present in the dataset, with both rewards equal to zero. The bisimulation Bellman residual could be zero, but the bisimulation error could still be positive in this case. Failure cases like this can be found in many instances. To highlight the importance of selecting a suitable reward scale, we analyze the upper bound of the cosine-based bisimulation, SimSR. We also demonstrate that without a proper reward scale, the effective dimension of the state feature decreases dramatically, as shown in Figure 1 (left). These examples aim to provide an intuitive understanding of our proposed techniques. It is possible that the reviewer did not notice these descriptions due to the way we presented them. We will consider adjusting the structure to make them more prominent.
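The failure case above can be sketched numerically. The residual form below (absolute reward gap plus discounted successor-embedding distance) is a generic bisimulation Bellman residual, and the embeddings are invented purely for illustration:

```python
import numpy as np

def bisim_bellman_residual(r_i, r_j, z_next_i, z_next_j, gamma=0.99):
    # One-sample bisimulation Bellman residual: absolute reward
    # difference plus discounted distance between successor embeddings.
    return abs(r_i - r_j) + gamma * float(np.linalg.norm(z_next_i - z_next_j))

# Both dataset transitions have reward zero and the encoder happens to
# map both successors to the same embedding, so the residual over the
# dataset is exactly zero ...
z = np.array([0.3, 0.7])
residual = bisim_bellman_residual(0.0, 0.0, z, z)
# ... yet transitions missing from the dataset may separate s_i and s_j,
# so the true bisimulation error can still be positive.
```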
- **Question: Running the V-D4RL benchmark over only 3 seeds seems rather low, and no measures of variance are provided. This stands in contrast to the D4RL results, for which not only full learning curves with error shading are shown, but also IQM measurements. Providing learning curves + IQM analysis for the V-D4RL experiment.**
- **Response**: Thanks; we have provided a more comprehensive performance comparison on the V-D4RL benchmark in the uploaded file, averaged over 10 seeds with one standard error shaded. We have also included IQM measurements across these datasets to demonstrate their effectiveness. These results can be found in Figure 3 and Figure 4 in the attached PDF file. We believe these additional experiments will help address concerns regarding the reliability of our method.
- **Question: More discussion (possibly in the appendix) regarding the variation in performance across different datasets would be helpful.**
- **Response**: Thank you for your suggestion. We will add these in revisions. Different datasets inherently possess unique characteristics, leading to variations in performance [A]. All 12 tasks in Figure 2 are considered somewhat relevant, with each column representing a specific task domain and each row representing the form of data collection. Due to variations in data quality and task nature, it is normal for algorithms to exhibit performance differences across these datasets. More details can be found in [B].
[A]: Schweighofer K, Hofmarcher M, Dinu M C, et al. Understanding the effects of dataset characteristics on offline reinforcement learning[C]//Deep RL Workshop NeurIPS 2021. 2021.
[B]: Fu J, Kumar A, Nachum O, et al. D4rl: Datasets for deep data-driven reinforcement learning[J]. arXiv preprint arXiv:2004.07219, 2020.
- **Question: Clarity could be improved in some areas, particularly in providing more background to the introduction of the expectile-based operator how it aids in in-sample learning / prevents overfitting to incomplete data. More precise language would also be helpful in the reward scaling section.**
- **Response**: Thanks, we will provide more relevant background information and descriptions to help readers follow the technical details more easily in revisions.
- **Question: Could the authors run experiments with MICo + RS (and MICo + RS + EBS) as well? If RS is not applicable to MICo for some reason, could the authors add a note either in the appendix or main text explaining why?**
- **Response**: We have added the corresponding results in Figure 2 of the uploaded PDF file; a detailed explanation can be found in the general response. We also provide a discussion of the suitability of our proposed techniques in the general response. We hope these results and discussions address the reviewer's concerns.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for your response! I found the additional details and experiments helpful, and am more confident about acceptance. I will increase my score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer LXid
Comment: Thank you for raising your score. We appreciate your efforts in reviewing our work. We will reflect your suggestions in the final version to enable it to be a high-quality paper. | Rebuttal 1:
Rebuttal: # General response
## 1. The severity of the proposed problem
### How do bisimulation-based objectives perform in other (online or goal-conditioned) settings?
- Various methods, such as DBC[48], MICo[6], SimSR[46], and PSE[A], have consistently demonstrated positive results in online settings, regardless of the presence of distractors. This evidence supports the efficacy of bisimulation techniques in online settings. Additionally, GCB[B] excelled in goal-conditioned environments, ExTra[C] showcased the power of bisimulation metric in exploration, and HiP-BMDP[D] successfully incorporated bisimulation into multi-task settings, highlighting its superior performance, all mostly in online settings too, with little work in offline RL. These studies suggest that when tailored to specific environments, bisimulation methods can excel. Despite these works, bisimulation methods have had little success when extended to offline settings, and our motivation is to tackle this problem.
### While bisimulation objectives used in the offline setting are directly affected by missing transitions, many other representation objectives may not be.
- When referring to state representation learning, using bisimulation in offline settings presents challenges due to the two issues we outlined in Lines 48-52 of our original submission: the presence of missing transitions and inappropriate reward scales. Concurrently, there exist other representation objectives, like CURL[28] and ATC[E], which focus on pairs of states without the explicit necessity for transition information. As a consequence, they do not explicitly need to account for missing transitions or reward scaling in their objectives. This absence of direct influence sets them apart from bisimulation-based methods. Yet, we consider that bisimulation-based techniques have a theoretical edge and have proven effective in online settings. Thus, we deem that our work is impactful in that it demonstrates that bisimulation can be successful offline.
[A]: Rishabh Agarwal, et al. Contrastive Behavioral Similarity Embeddings for Generalization in Reinforcement Learning. ICLR 2021
[B]: Philippe Hansen-Estruch, et al. Bisimulation Makes Analogies in Goal-Conditioned Reinforcement Learning. ICML 2022
[C]: Anirban Santara, et al. ExTra: Transfer-guided Exploration. AAMAS 2020
[D]: Amy Zhang, et al. Multi-Task Reinforcement Learning as a Hidden-Parameter Block MDP. Arxiv 2020.
[E]: Stooke, Adam, et al. Decoupling representation learning from reinforcement learning. ICML 2021.
### Compounded effect for bisimulation principle in offline settings
- In online scenarios, state representations and policies are updated concurrently, while in offline settings, the state representation is pre-trained before policy learning, with the two phases completely decoupled. Errors during representation learning in offline settings can have a compounded effect on policy learning, leading to significant issues. This is the reason that missing transitions are particularly harmful to the bisimulation principle in offline settings. Although reward scales affect bisimulation universally, since offline settings require pretraining the state embedding, any major discrepancy between this fixed representation and the policy parameter space can further undermine the learning process. Hence, the proposed solutions hold promise for enhancing bisimulation's efficiency in offline settings.
## 2. Suitability of different techniques
- For RS: In essence, the given theoretical analysis is applicable across all bisimulation-based objectives. However, the precise setting of $c_r$ hinges on the underlying distance. For instance, SimSR uses the cosine distance, which has definitive bounds, so we need to infer the ideal setting from Equation 10 and Theorem 8. In contrast, MICo-like distances and DBC employ the L-K distance and the L1 distance respectively, whose ranges are $[0,+\infty)$; consequently, they can accommodate a wider range of reward scales. We propose our approach as a general method/principle for employing a novel bisimulation metric or distance, especially in the context of offline RL.
- For EBS: We provide EBS as a general method, which is applicable to all bisimulation-based objectives, given that they all adhere to the foundational principle of bisimulation. This principle revolves around the contraction mapping properties similar to the value iteration. Whenever there's an intent to employ bisimulation in offline scenarios, with an aim to reduce the Bellman residual for approximating the fixed point, the outlined challenge emerges. Consequently, EBS holds the potential to enhance any bisimulation-based method, regardless of the distance they use.
## 3. New experimental results
### MICo+RS and MICo+RS+EBS results
- Please see Figure 2 in rebuttal pdf. We provide the results of MICo+RS on the D4RL benchmark, as new experiments for the rebuttal.
### Ablations for RS
- In our work, reward scaling comprises two components: i) min-max normalization, and ii) the determination of $c_r$ as $1-\gamma$. To substantiate the efficacy of our proposed methodology, we considered different combinations of min-max normalization/standardization and various values of $c_r$ (including 1, 0.1, 0.01, and 0.001). The results provided empirical validation of our theoretical exploration.
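The scaling recipe described above (min-max normalization followed by multiplication with $c_r = 1-\gamma$) can be sketched as follows; the function name is ours:

```python
import numpy as np

def scale_rewards(rewards, gamma=0.99):
    # i) Min-max normalize the dataset rewards into [0, 1].
    r_min, r_max = rewards.min(), rewards.max()
    normalized = (rewards - r_min) / (r_max - r_min)
    # ii) Multiply by c_r = 1 - gamma, so that any discounted return
    # sum_t gamma^t * r_t stays bounded in [0, 1].
    return (1.0 - gamma) * normalized
```

With this choice the discounted return of any trajectory is at most $\sum_t \gamma^t (1-\gamma) = 1$, which keeps returns representable by a tightly bounded embedding distance.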
### Visual-D4RL results update
- In the original submission, the performance reported in Table 1 omitted variance. We provide additional results in the PDF, showing performance curves averaged over 10 different random seeds with one standard error shaded, accompanied by the IQM metric aggregating overall statistical properties. The results depicted in Figure 3 and Figure 4 in the uploaded PDF file are consistent with the empirical performance in Table 1 of our original submission for image-based settings.
Pdf: /pdf/ff5dd24908cab3f3a6f7394d8ac3290e9e2e5d63.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper addresses the problem of bisimulation-based representations in offline reinforcement learning tasks, which are less effective than alternative methods. To address this issue, the paper proposes a tailored reward scaling strategy and an auxiliary loss function based on bisimulation metrics to learn robust state representations. The experiments show that the proposed method outperforms existing methods on a range of locomotion tasks in the DeepMind Control Suite. Ablation studies suggest that the proposed reward scaling strategy and bisimulation-based loss function are critical to the performance of the method. Overall, I believe this work makes a valuable contribution to the field of reinforcement learning by improving the effectiveness of bisimulation-based representations in offline RL tasks.
Strengths: **Motivation and intuition**
- The motivation for addressing the pitfalls of bisimulation-based representations in offline reinforcement learning is convincing. The authors clearly explain the challenges faced by bisimulation-based approaches in offline RL tasks and the need for solutions to improve their performance. Their analysis reveals that missing transitions in the dataset, which often occur in offline settings, are particularly harmful to the bisimulation principle
**Novelty**
- The idea of utilizing expectile operators to prevent overfitting to incomplete data is intuitive and convincing, which is theoretically supported by the paper's analysis. This paper presents an effective way to implement this idea, which has not been explored in previous works.
**Technical contribution**
- The proposed solutions, an expectile-based operator and a tailored reward scaling strategy, for addressing the pitfalls of bisimulation-based representations in offline RL tasks seem compelling, especially when dealing with limited datasets. The authors provide a thorough analysis of the performance gains achieved by their proposed solutions in various benchmark suites.
**Clarity**
- The overall writing is clear and well-organized. The authors utilize figures well to illustrate the ideas, and Figure 1 clearly shows the results with and without reward scaling.
- The paper gives clear descriptions in both theoretical and intuitive ways. The notations, formulations, and theorems are well-explained in the appendix, making it easy for readers to follow the technical details.
**Experimental results**
- The experimental results are impressive and demonstrate the effectiveness of the proposed framework in improving the performance of bisimulation-based approaches in offline RL tasks. The authors provide clear visualizations of the results, and Figure 3 particularly provides a clear comparison of the performance gains achieved by the proposed framework over the two state-of-the-art bisimulation-based algorithms, MICo and SimSR.
**Reproducibility**
- The code is provided, which helps understand the details of the proposed framework.
- Given the clear description in the main paper and the details provided in the appendix, I believe reproducing the results is possible.
Weaknesses: **Ablation study**
The proposed framework comprises two components: an expectile-based operator and a tailored reward scaling strategy. Conducting an ablation study to verify the proposed methods to some naive ones would be good. Take the tailored reward scaling strategy for example, one possible naive reward scaling strategy could be to multiply the reward signal by a constant factor, such as 0.1 or 10, and observe the effect on the performance of the bisimulation-based RL algorithm. Another possible method could be to clip the reward signal to a certain range, such as [-1, 1], and observe the effect on the performance.
**Limitation**
I notice that the authors say that "Since MICo does not necessitate a particular upper bound, RS may harm its performance", leading them to exclude the MICo+RS results from the main experiments. It would be good to elaborate more on when the two methods, the expectile-based operator and the tailored reward scaling strategy, can help improve the performance of given algorithms.
**Experiment setup**
I think that adding manipulation and navigation environments to the experimental evaluation would be a valuable addition to the paper. While the locomotion environments used in the current evaluation provide a good testbed, it would be beneficial to evaluate the method on a broader range of environments to better understand its generalizability and effectiveness across different types of tasks.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: See above
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and favorable assessment of our work! We appreciate the time and effort you have put into evaluating our work. Based on your feedback, we have included additional rebuttal experiments, on benchmark D4RL tasks and on visual image-based offline tasks, to demonstrate the significance of our approach (please see the attached rebuttal PDF). We also provide a more in-depth discussion regarding the suitability of the techniques we have proposed. Please refer to our generic response to all reviewers, which addresses these points. We proceed with our response to each concern as follows:
- **Question: Conducting an ablation study to verify the proposed methods to some naive ones would be good. Take the tailored reward scaling strategy for example, one possible naive reward scaling strategy could be to multiply the reward signal by a constant factor…**
- **Response**: Thanks for your suggestion! We concur that an extensive ablation analysis would indeed enrich our work significantly. We have incorporated an ablation study of the reward scaling strategy in the attached PDF file. In brief, as deliberated in Lines 260-270 of our paper, only when we employ min-max normalization and set $c_r\in[0,1-\gamma]$, can we guarantee alignment and attain commendable performance; the other combinations of reward scaling strategy invariably fall short across all datasets. Please refer to the general response for more details.
- **Question: It would be good to elaborate more on when the two methods, an expectile-based operator and a tailored reward scaling strategy, can help improve the performance of given algorithms.**
- **Response**: Thanks for your insightful suggestion. We have provided a general response discussing the suitability of the different techniques, and we will also add the corresponding descriptions in the revision. Briefly, the theoretical analysis of both techniques applies to all bisimulation-based objectives, but their practical applications may vary. Specifically, the expectile-based operator (EBS) can be applied to any bisimulation objective whenever the bisimulation principle is used in offline settings. As for Reward Scaling (RS), we only need to consider it when the distance used in bisimulation is tightly bounded. For example, the range of the L1/L2 distance and the MICo-like distance (diffuse metric) is $[0,+\infty)$, while the range of the cosine distance is $[-1,1]$. Therefore, bisimulation with L1/L2/MICo, which falls into the former category, does not need RS; bisimulation with the cosine distance or a similar distance, which falls into the latter category, does require RS. We hope this clarifies any confusion, and we are happy to provide further explanation if needed.
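A small numerical contrast of the two categories, with vectors chosen arbitrarily for illustration:

```python
import numpy as np

x = np.array([3.0, -4.0])
y = -2.0 * x
# L1 distance is unbounded above, so it can track rewards at any scale.
l1 = float(np.abs(x - y).sum())
# Cosine distance is confined to [0, 2] (here it attains the maximum,
# since y is antiparallel to x), which is why reward scaling is needed
# to keep returns representable within this bound.
cos = 1.0 - float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))
```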
- **Question: adding manipulation and navigation environments to the experimental evaluation of the proposed method would be a valuable addition to the paper.**
- **Response**: We fully agree with your suggestion that including experiments in manipulation and navigation environments would strengthen the generalizability and robustness of our research. However, given the limited time window of the current rebuttal stage and our computational resources, we have prioritized completing other experimental supplements first. For instance, we have focused on conducting the ablation study on reward scaling in D4RL tasks and running experiments with more seeds in the V-D4RL tasks. Accomplishing the navigation tasks and other experiments necessitates more time. We plan to explore these experiments later and conduct an evaluation on larger-scale datasets to provide more robust evidence of the potential of the bisimulation method.
Once again, we appreciate the insightful comments and are grateful for the opportunity to improve the quality of our work. If you have any further suggestions or guidance, we would be more than happy to incorporate them into our work.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: I appreciate the author's rebuttal, which addresses some of my concerns/confusion. I believe this work studies a promising problem and provides meaningful insights. Yet, I still hope to see a more comprehensive set of experiments, including robot arm manipulation (such as D4RL kitchen) and navigation. In sum, I am still slightly leaning toward accepting this paper, while I won't fight for this paper if the majority of the reviewers have a different opinion.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer aN21
Comment: Thank you for the positive feedback. We appreciate your efforts in reviewing our work. We will reflect your suggestions in the final version to enable it to be a high-quality paper. | Summary: This submission's goal is to understand why bisimulation methods suffer in the offline RL setting. The authors go on to propose two methods that help learn a better bisimulation metric, resulting in representations that more faithfully capture state abstractions from offline datasets. The two methods are 1) (EBS) an expectile operator that regularizes the learned state representation toward in-sample learning and 2) (RS) a reward scaling where they tune the hyperparameter on the reward difference (with theory for further motivation). Experiments are conducted in offline RL using D4RL with either state or pixel-based observations. The empirical results provide some evidence that RS+EBS improve the learned bisimulation metric, and hence the state representation.
Decision: The theoretical results, at the intersection of bisimulation and offline RL theory, are interesting and novel. Beyond the theory, however, I remain unconvinced by the empirical evidence and the severity of the proposed problem. As a result, I lean towards rejection. In the D4RL experiments, it does not seem that there is a benefit to using the bisimulation metric at all. While there is a purported benefit in the visual domain, it is not clear whether this is due to RS+EBS or to the fact that there are two additional tunable hyperparameters. Moreover, the two methods do not really address a novel problem. Missing transitions, and EBS, are an issue for offline RL in general, not necessarily isolated to bisimulation metrics, whereas the reward scale is an issue with the bisimulation metric, and not necessarily isolated to the offline RL setting. So, while I like the spirit of the paper, I do not find the empirical evidence convincing.
Strengths: - Interesting theoretical insights, combining 1) bisimulation theory, which casts a metric as a fixed point in a lifted MDP, and 2) offline RL theory, which shows limitations of RL algorithms on limited datasets.
- The presentation and initial motivation of the problem, bisimulation in the offline setting, is clear. There is value in understanding the role of bisimulation metrics, especially in the case of offline RL, where the learned representation is known to be important for transfer.
Weaknesses:
----------
- While motivation is initially clear, it seems ultimately misguided.
The results in the reference ([42] in paper, Yang and
Nachum, 2021) do not suggest that bisimulation is particularly
poor in the offline setting. The bisimulation results are also poor
in the online setting, and the paper continues to show that several
methods perform worse in the offline setting, which is unsurprising.
- The two methods used to improve bisimulation in the offline setting
address issues that are either not unique to bisimulation (missing
transitions) or not obviously specific to the offline setting
(reward scaling). Both of these methods do seemingly help improve
learning the bisimulation metric, but the empirical results are not
convincing that this is helpful over the baselines.
- Empirical results on D4RL are not entirely convincing, and the
reasons for excluding MICo+RS are not clear. The results in the
pixel-based D4RL, while seemingly an improvement on average, are
difficult to evaluate without confidence intervals and with only 3
runs.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Detailed Comments & Questions
-----------------
- Line 38: "objectives in most approaches are required to be coupled
with the policy improvement procedure" What about their approaches
requires coupling with the policy improvement procedure, and why is
this important? Is this because they are using $\pi^*$-bisimulation?
- Line 39-43: The point you are making here is that bisimulation
metrics do not work well in offline RL. The points about special
cases of online RL seem unrelated and are distracting from this main
point. Moreover, reference [19] provides no empirical evidence for
or against bisimulation metrics in offline RL. Further, the
empirical evidence in [42] suggests that all methods do worse in
the offline RL setting, and does not seem to suggest that
bisimulation in particular suffers in the offline setting.
- Line 48: The problem of missing transitions is an inherent problem
to all offline RL methods.
- Line 50: The problem of reward scale seems unrelated to offline RL?
- Line 55: "we can achieve a balance between the behavior measurement
and the greedy assignment of the measurement over the dataset." I
do not understand this statement. Are you claiming that the
expectile operator balances a type of exploration-exploitation
trade-off? If so, it is not clear how or why.
- Line 244: "Most previous works [4, 45, 5, 43] have overlooked the
impact of reward scaling in the bisimulation" The DBC paper in
reference 45 does in fact have a hyperparameter for reward scaling.
I think this sentence needs more qualification, because most
combined losses include a hyperparameter for tuning their relative
importance.
- Line 248: I have a hard time interpreting the conclusion based on
the inequality following (10). The result is an upper bound on the
fixed point, and both $c_k$ and $c_r$ are free variables on the
right-hand side of the inequality. But what does this inequality tell us
about the trade-off of setting, for example, $c_k = 0$, thus
upper-bounding the fixed point by 0? Different settings of $c_k$ and
$c_r$ lead to different fixed points $\tilde{G}^\pi$. By using this
inequality to set an upper bound on the distance, you are also
biasing the metric. For example, for smaller $c_k$, you are
considering states similar if the resulting immediate behavior is
similar and putting less weight on differences in long-term
behavior.
- "However, when bisimulation operators are instantiated with
bounded distances, e.g., cosine distance, such a setting may be
unsuitable." I don't see why this is inherently undesirable. The
bisimulation distance aggregates the cosine distance across a
trajectory via the Bellman recursion. I do not see why they need to
have the same upper bound. What you are attempting here seems like,
in an RL analogy, putting the rewards and the value on the same
scale.
- "with the maximum value of 1 of the cosine distance. To achieve a
tighter bound in Equation 11, we should then maximize the reward
scale, setting $c_r$ to $1 - \gamma$." I do not think that $c_r$ is a free
parameter that can be chosen to tighten the bound. This is because
$c_r$ also determines the fixed point of the bisimulation metric and
the learned representation. But, accepting your result for the
moment, I do not see why you set $c_r$ to be $1-\gamma$ and not some
arbitrarily large number.
- Analysis of Figure 2: I am not sure what to take away from these
results. Does the TD3BC baseline use any of the learned
representations, or is it trained from scratch from the raw state?
The goal of the paper is understanding and addressing the pitfalls
of bisimulation representations in offline RL. But, these results
suggest that bisimulation representations are neither needed nor
helpful for the downstream task. The motivation is undermined: even
if bisimulation representations are augmented with your suggested
changes (expectiles + reward scaling), we get roughly the same
performance as not using any bisimulation at all.
- MICo + RS: what does it mean that MICo does not necessitate an
upper-bound?
- Table 1: The results are not entirely convincing because the
pixel-based offline RL experiments are known to have high variance.
Without reporting confidence intervals, and by using only 3 seeds, I
do not find this evaluation convincing.
- Ablation: The reward scale coefficient, while motivated by theory,
is not ablated.
Minor Comments
--------------
- Line 32: No question has been posed yet, so it is not clear what
question this paper is answering. A question does occur a few
paragraphs down, so maybe this statement is an artifact?
- Figure 2: Some lines are dashed while others are not, and this is
not indicated in the legend.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors outline limitations regarding the expectile operator hyperparameter. Further, I see no potential negative societal concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and feedback on our paper. We have included new experimental results and some general discussion; please refer to our general response and the uploaded file, which address these points.
## The severity of the proposed problem
Our study focuses on offline state representation learning rather than on offline RL algorithms or offline policy optimization per se. In this paradigm, policy improvement is achieved on top of a representation space learned with bisimulation methods. Missing transitions can therefore impact both the offline RL algorithm and the bisimulation objective, causing compounding errors. In contrast, other representation objectives [A, B] are not affected by missing transitions, since their objectives do not explicitly depend on the transitions and underlying dynamics; this is why such objectives have so far been shown to be beneficial in offline settings, unlike bisimulation. Identifying and addressing this is the primary contribution of our work.
In addition, reward scaling is a crucial concern in the bisimulation principle, where a severe discrepancy may occur between the parameter space of the representation and the policy due to the cumulative effect, intensifying representation collapse in offline environments. In contrast, bisimulation objectives have been shown to be effective for online environments, therefore we take motivation from past works to adapt bisimulation methods to be effective for offline RL.
Additionally, TD3+BC has already been shown to be an effective algorithm for rapid convergence and strong performance in offline RL; on datasets where TD3+BC struggles to converge, the significance of our methodology is particularly palpable. Experimental results show that our method converges faster, leading to improved results on 8 of the 12 datasets in standard benchmark settings.
[A]: Mengjiao Yang, Ofir Nachum. Representation Matters: Offline Pretraining for Sequential Decision Making. ICML 2021
[B]: Max Schwarzer, et al. Pretraining Representations for Data-Efficient Reinforcement Learning. NeurIPS 2021
## Detailed questions
Due to the character limits, we answer per questions below:
- In online settings, most approaches use $\pi$-bisimulation as opposed to $\pi^*$-bisimulation, considering that we cannot ascertain $\pi^*$ during policy improvement. Theoretically, employing bisimulation as a representation objective in online settings mandates iterating bisimulation objectives until they converge to their fixed point at each singular policy $\pi$ during policy iteration. However, this is computationally inefficient. A more comprehensive analysis is available in Appendix A of the DBC paper.
- (Follow above) Conversely, in offline settings, the behavior policy of the offline dataset is generally considered a fixed one. This is the primary distinction between online and offline settings for bisimulation. Consequently, superior performance in online settings should intuitively lead to similar results in offline settings. Paper [42] empirically substantiated the subpar performance of bisimulation in offline settings, and Paper [19] provided a clear corresponding explanation in Section 2.
- We are concerned more with representation learning here instead of offline RL methods themselves. Please refer to the general response.
- In general response.
- No, the expectile operator strikes a balance between behavior and optimality, not exploration-exploitation. We have furnished a background description and insight into utilizing the expectile operator in Appendix C.5.
- We revise our description to ensure clarity and accuracy. Our objective is to provide theoretical guidance on adjusting reward scaling, rather than introducing a new hyperparameter. While it is true that many combined losses include a hyperparameter to balance their relative importance, our focus is distinct. We aim to guide the balance between immediate similarity and long-term similarity, considering the properties of the associated distance.
- Indeed, a smaller $c_k$ will attribute lesser weight to variations in long-term behavior. When $c_k$ is set to zero, we formulate bisimulation purely via immediate reward, signifying the most greedy way to learn representation. This is precisely why we retain $c_k$ as fixed at $\gamma$, aligning with the settings in value iteration.
- This does not exactly parallel the situation for rewards and values. When we aggregate bounded distances such as cosine distance, the precise result we obtain is $\cos(\phi(x),\phi(y)) \in [-1,1]$. This indicates that we cannot procure any value outside this boundary, with the upper limit being inaccessible. When considering reward and value, we seldom utilize cosine distance to approximate the value (we scarcely employ tanh activation for value networks either).
- Please refer to the above response and Line 260-270 in our main paper.
- The others please refer to the general response.
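To make the scale concern discussed in the points above concrete, here is a minimal scalar caricature (our own illustration, not code from the paper): iterating the backup $g \leftarrow c_r \Delta r + \gamma g$ drives $g$ to the fixed point $c_r \Delta r / (1-\gamma)$, so with a maximal reward difference of $\Delta r = 1$, choosing $c_r = 1-\gamma$ is exactly what keeps the fixed point at the cosine distance's upper bound of 1.

```python
def bisim_fixed_point(delta_r, c_r, gamma=0.99, iters=2000):
    """Iterate g <- c_r * delta_r + gamma * g, a scalar caricature of
    the bisimulation backup; converges to c_r * delta_r / (1 - gamma)."""
    g = 0.0
    for _ in range(iters):
        g = c_r * delta_r + gamma * g
    return g

# c_r = 1 - gamma keeps the fixed point within the cosine bound of 1 ...
print(bisim_fixed_point(1.0, 1 - 0.99))  # ~1.0
# ... while leaving the reward difference unscaled blows far past it.
print(bisim_fixed_point(1.0, 1.0))       # ~100.0
```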
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications; I have been convinced at least of the importance of the problem addressed. I accept that missing transitions are also uniquely problematic for bisimulation-based representations because they are learned via a Bellman backup and inherit the flaws of RL methods in offline settings.
On the problem of reward scale, however, I remain somewhat puzzled. I do understand that if $G$ is a cosine distance, then it may be problematic to learn an embedding that respects $G(\phi(x), \phi(y)) \approx |r_x - r_y| + \gamma G(\phi(x'), \phi(y'))$ (removing expectations for simplicity). This is because $G < 1$ while $|r_x - r_y| \gg 1$. But this uses a rather naive implementation of the cosine distance, and it seems that the MICo paper adds the norms of the representations in addition to the cosine distance, which would prevent this issue (Section 5, Castro et al. 2022). Furthermore, adding scaling terms to either the distance or the reward function is not new (the original DBC paper had this), and the analysis provided for setting $c_r$ to be $1-\gamma$ is interesting but ultimately unconvincing. Why should state similarity be weighted differently from value estimation? Put another way, if an agent is trying to optimize the sum of rewards for a specific $\gamma$, why should its state representation use a different weighting?
While reward scale can be an issue, it should also be an issue in the online setting and it doesn't seem to be so. At the same time, the results in Figure 2 suggest that reward scaling seems to be a bigger improvement to performance than EBS. Which suggests that the reward scaling chosen is a more pertinent issue than the underlying offline RL problem.
Overall, the rebuttal has helped clarify the significance of one of the main problems: that of missing transitions. And there is value in raising this point. However, I still remain unconvinced of the overall submission's contribution and empirical evidence.
---
Reply to Comment 1.1.1:
Title: Response to vGKe (1/2)
Comment: Thank you for your further detailed exploration of our work. Here, we provide point-by-point responses to your questions:
**Q1: MICo paper adds the norms of the representations in addition to the cosine distance which would prevent this issue (section 5, Castro et al. 2022)...**
**A1**: This statement is partially true. The standard parameterized distance in MICo is represented by $U_\omega$, which belongs to a partial metric space [1]. This partial metric space yields an associated metric space, where the distance between two points $x$ and $y$ is given by $d(x,y) = 2U_\omega(x,y) - U_\omega(x,x) - U_\omega(y,y)$. In practical computation, MICo utilizes the "angular distance" [2] as the associated metric, as described in Section 5 of the MICo paper. It is important to note that MICo introduces a hyperparameter/scalar $\beta$ to scale the angular distance $\theta$. The choice of $\beta$ can significantly impact performance, as shown in Figure 13 of the MICo paper. Therefore, rather than the norms of the representations addressing this issue, it is the scalar $\beta$ that may help alleviate it; in the response below, we show that $\beta$ plays a role similar to reward scaling.
[1]: Bukatin, M., et al. Partial Metric Spaces. The American Mathematical Monthly, 116(8): 708-718, 2009.
[2] Wikipedia. Cosine similarity. https://en.wikipedia.org/wiki/Cosine_similarity#Angular_distance_and_similarity.
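As a small numeric check of the point above, here is a sketch (assuming MICo's parameterization $U_\omega(x,y) = \tfrac{1}{2}(\|x\|^2 + \|y\|^2) + \beta\,\theta(x,y)$, with $\theta$ the angle between the embeddings; $\beta = 0.1$ is an illustrative value): in the associated metric $d(x,y) = 2U_\omega(x,y) - U_\omega(x,x) - U_\omega(y,y)$, the squared-norm terms cancel, leaving $2\beta\,\theta(x,y)$, so the scalar $\beta$ alone sets the scale of the distance.

```python
import math

def angle(x, y):
    """Angle in radians between two vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    norm = lambda v: math.sqrt(sum(a * a for a in v))
    return math.acos(max(-1.0, min(1.0, dot / (norm(x) * norm(y)))))

def u(x, y, beta=0.1):
    """Sketch of MICo's parameterized partial metric U_omega."""
    sq = lambda v: sum(a * a for a in v)
    return 0.5 * (sq(x) + sq(y)) + beta * angle(x, y)

def associated_metric(x, y, beta=0.1):
    """d(x, y) = 2 U(x, y) - U(x, x) - U(y, y): the squared-norm terms
    cancel, so d reduces to 2 * beta * angle(x, y)."""
    return 2 * u(x, y, beta) - u(x, x, beta) - u(y, y, beta)

# Orthogonal vectors of very different norms get the same distance:
print(associated_metric([1, 0], [0, 1]))  # ~0.314 (= 2 * 0.1 * pi/2)
print(associated_metric([9, 0], [0, 5]))  # ~0.314 as well
```

The norms therefore do not prevent a scale mismatch with the rewards; only $\beta$ does, which is the sense in which it plays a role analogous to reward scaling.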
**Q2: Adding scaling terms is not new...**
**A2**: We would like to emphasize once again that the main contribution of our work is not simply adding a scaling term, but rather providing a theoretical guarantee for its choice. Our contribution can be seen as a theoretical guide that helps researchers understand the relationship between the reward scale and the distance coupled with bisimulation when they opt to use a new distance metric. We acknowledge that previous works have also incorporated such a term, albeit mostly as a user-specified hyperparameter (e.g., DBC and MICo). In offline RL, we generally cannot do a hyperparameter search, so any theoretically motivated heuristic for setting hyperparameters is of high interest.
**Q3: Why should state similarity be weighted differently from value estimation?**
**A3**: Firstly, reward rescaling does not have an impact on policy optimization, as supported by Appendix A of DBC's paper. Additionally, value estimation could use the same reward scaling with no consequence, and therefore there is no inconsistency at this level.
Secondly, reward scaling aims to align the scale of the reward with the distance used. It can be regarded as a hyperparameter within the representation approach itself. An analogy can be drawn with contrastive learning (CL) methods in computer vision, which also involve multiple hyperparameters, yet their existence does not affect downstream tasks like classification. Similarly, in our case, bisimulation approaches can be seen as analogous to CL, with value function learning and policy improvement being the downstream tasks.
**Q4: It should also be an issue in the online setting and it doesn't seem to be so...**
**A4**: Indeed, this issue also arises in online settings. We additionally ran an experiment with SimSR on the cheetah-run task (a DMC task) in the online setting. We present some initial results here:
| Steps | 10000 | 20000 | 30000 | 40000 | 50000 |
| -------- | -------------- | ---------------- | --------------- | -------------- | -------------- |
| SimSR+RS | **143.95 (48.71)** | **266.73 (41.82)** | **394.67 (45.11)** | **446.69 (21.33)** | **557.43 (24.97)** |
| SimSR | 115.81 (30.39) | 237.30 (77.71) | 362.02 (78.90) | 410.19 (71.36) | 463.09 (27.77) |
We evaluated the average return over 4 seeds; each column reports the average return at the specified number of gradient steps. The results show that RS is substantially effective in online settings as well.
Despite this positive result in the online setting, we would like to stick to our story for the submission. Indeed, our approach is justified by the compounding analysis in the offline setting. We further support this compounding effect in the offline setting with an experiment showing that the bisimulation error may remain high even under a small Bellman bisimulation residual error. Finally, we empirically show that RS yields a significant performance improvement in the offline setting. In our opinion, this forms a well-rounded scientific contribution in itself. We will use this successful but preliminary online experiment as an exciting opening for future studies on reward scaling, both theoretical and empirical. We would like to deeply thank reviewer vGKe for inciting us to investigate the online setting.
Cocktail: Mixing Multi-Modality Control for Text-Conditional Image Generation | Accept (poster) | Summary: This paper proposes an approach to enhance the controllability of generative image models through the fusion of multiple modality signals.
The proposed model is a strict generalization of ControlNet which is able to incorporate geometric constraints into the generation, including pose, edges, and segmentation map.
Strengths: The paper describes a very practical generalization of a SOTA approach to controllable generative models. It is well motivated, provides a clear description of the architecture, and both quantitative and qualitative results that validate the approach.
Weaknesses: None. The paper is very clear on the benefits and limitations of the approach, and provides a practical improvement over a SOTA approach that is relevant to the research community and downstream users of the technology.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: None.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations adequately described.
It could be a good idea to include a sample of random (not cherry-picked) results in the appendix in order to provide a sense of typical failure modes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are deeply grateful for your comprehensive review and favorable assessment of our paper. Your commendation regarding the soundness, presentation, and contributions of our research not only encourages us but also validates the effort we have invested in this work.
We will try to include more samples without cherry-picking.
---
Rebuttal Comment 1.1:
Comment: Thank you for the additional details. | Summary: This paper studies text-conditional diffusion models and proposes a pipeline for image generation with multi-modal control signals. The pipeline contains three new ingredients. Firstly, the paper introduces gControlNet, a generalized version of ControlNet, that can take different conditions (such as sketch, segmentation and pose maps) simultaneously as input without increasing the ControlNet size linearly with respect to the number of conditions. Secondly, the paper proposes to inject features produced by gControlNet into the backbone model via normalization, instead of a simple sum as used in the original ControlNet. Thirdly, the paper introduces spatial sampling that aims to harmonize spatial arrangements from both text prompt and image controls. Experiments show that the proposed method achieves on-par quantitative performance compared with ControlNet and T2I-Adapter, and improved visual quality especially with multiple modalities.
Strengths: 1. ControlNet proves to be a powerful approach for controllable text-to-image generation, while its original form requires fine-tuning a ControlNet network for each condition, which is resource consuming. This paper proposes a method to jointly train a shared network that can deal with different types of control input, which may extend the applicability of ControlNet in many cases.
2. The paper has done extensive quantitative evaluations to compare the proposed method with previous work. For generative models, quantitative metrics may not be able to measure the actual quality of generated images, but it is important to study what the good metrics are and how to design better ones. This paper makes an effort in this direction.
Weaknesses: 1. The gControlNet is a crucial component in the proposed pipeline, but the paper does not discuss details of the downsampling network M, either in main text or Appendix. On the other hand, it seems the controllable normalization and spatial guidance do not show significant improvement over their counterparts in previous work. In Figure 6 gControlNet without ControlNorm has much worse image quality, but is it due to insufficient training in this case (as the original ControlNet does not use ControlNorm and performs well)? As for spatial guidance sampling, the paper shows the effect of retaining background, but not placing objects to the correct location. The latter may not be an issue since sketch, segmentation and pose maps contain spatial information already. This seems to weaken the role of spatial sampling, which requires manual design that is not present in paper.
2. The paper uses Stable Diffusion 2.1 as backbone for training gControlNet, while ControlNet and T2I-Adapter used Stable Diffusion 1.5 as backbone. Therefore it may be unfair to directly compare gControlNet with ControlNet and T2I-Adapter. Also it seems that gControlNet does not show much qualitative improvement over ControlNet in single condition cases.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. What is the architecture of the downsampling network M in gControlNet? Can it deal with varying number of input conditions, and deal with new conditions during inference that are unseen in training?
2. In spatial guidance sampling, how are M^{pos} and M^{neg} determined? Are they instance-specific or universal for all instances? It would be interesting to see, in challenging cases, how this module can help resolve conflicts among different input condition maps, and layer arrangement in complex scenes.
3. How are the three types of control used in training? Are all of them or only some of them used in each update step?
4. (Minor) The mapping notation in Eq. (1)-(3) is confusing.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The paper discusses limitations of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed reading and valuable insights concerning the gControlNet and its components. Allow me to clarify your concerns:
#### **Regarding the downsampling network $\mathcal{M}$:**
$\mathcal{M}(C^k)$ is the output of a simple convolutional network $\mathcal{M}$ consisting of eight convolutional layers and seven SiLU activation functions, with the final convolutional layer zero-initialized. Each $\mathcal{M}$ has 1.29M parameters, and each modality has its own corresponding $\mathcal{M}$; e.g., our case includes $3$ different modalities, so the $\mathcal{M}$ networks have 3.88M parameters in total. The model can effectively handle any number of trained modalities, i.e., any subset of the training modalities up to the full training set, but it is unable to deal with modalities unseen during training.
#### **On gControlNet without ControlNorm in Figure 6:**
The training epochs for gControlNet without ControlNorm and gControlNet with ControlNorm are the same. It is worth noting that *gControlNet without ControlNorm* also has the mixing network $\mathcal{M}$; the lack of ControlNorm simply prevents the effective balancing of different modalities, leading to a decline in image quality. This precisely demonstrates the significant improvement ControlNorm brings in weighting the conditional inputs. Also, unlike T2I-Adapter and ControlNet, which train different modalities separately, gControlNet (with or without ControlNorm) co-trains $\mathcal{M}$ across modalities.
#### **Regarding spatial guidance sampling:**
Spatial guidance requires manually setting keywords or latent bounding boxes of objects during inference. $M^{pos}$ and $M^{neg}$ are case-specific; once the keywords or boxes are provided, the masks are generated from them based on the time step $t$. It is also important to note that we did not utilize the spatial guidance strategy in our quantitative experiments.
#### **On the comparison between gControlNet, ControlNet, and T2I-Adapter:**
We acknowledge the different backbones used for training the models. However, as illustrated in [1], when CFG>5 the FID of SD2.1 is approximately equal to that of SD1.5, and its CLIP score is inferior to SD1.5's; this is consistent with Table 1 in our main text. The human preference win rate of SD2.1 is also slightly lower than SD1.5's, as shown in [2]. Our paper primarily aims to demonstrate the ability of additional modalities to control image generation. The control ability of our proposed method is substantiated effectively in Appendix Table 1, thus making the comparison relatively fair.
#### **The control signals usage during training:**
Thank you for raising this question, as it is indeed worth emphasizing in the paper. During the training process, we randomly select the number of modalities and apply random masking within certain areas of the chosen modality maps. Specifically, assuming there are three known modalities, we randomly select 0 to 3 of them at each training step, and for each selected modality we choose arbitrary areas as the input signal. That is to say, within the given control signal domain, there may be a single control signal ($n=1$), an overlap of $n \in \\{2,3\\}$ control signals, or no control signal at all ($n=0$).
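The sampling scheme described above might be sketched as follows (an illustrative sketch only; the modality names, the list-of-lists map encoding, and the rectangular masked areas are our assumptions, not details from the paper):

```python
import random

def sample_control_signals(modalities, rng):
    """Pick a random subset of the available modalities (possibly none)
    and keep only a random rectangular area of each chosen map,
    zeroing everything outside it."""
    chosen = rng.sample(sorted(modalities), rng.randint(0, len(modalities)))
    out = {}
    for name in chosen:
        cmap = modalities[name]
        h, w = len(cmap), len(cmap[0])
        # choose an arbitrary sub-area of the control map to keep
        top, left = rng.randrange(h), rng.randrange(w)
        bottom, right = rng.randint(top, h - 1), rng.randint(left, w - 1)
        out[name] = [
            [cmap[i][j] if top <= i <= bottom and left <= j <= right else 0
             for j in range(w)]
            for i in range(h)
        ]
    return out

# Example: three 4x4 control maps (e.g. sketch / segmentation / pose).
rng = random.Random(0)
maps = {m: [[1] * 4 for _ in range(4)] for m in ("sketch", "seg", "pose")}
signals = sample_control_signals(maps, rng)  # n in {0, 1, 2, 3} maps survive
```

With such a scheme, any given training step sees between zero and all three control maps, each possibly restricted to a sub-area, which is what allows handling arbitrary subsets and partial signals at inference time.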
#### **The mapping notation in Eq. (1)-(3) is confusing.**
We will revise the manuscript to improve clarity of notation.
[1] https://github.com/Stability-AI/stablediffusion, Fig: https://github.com/Stability-AI/stablediffusion/raw/main/assets/model-variants.jpg
[2] https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0, Fig: https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/comparison.png | Summary: This study introduces ChatIR, a chat-based image retrieval system that engages in a conversation with the user to clarify their search intent and retrieve the desired image from a large corpus. The system leverages Large Language Models to generate follow-up questions to an initial image description and achieves a success rate of over 78% after 5 dialogue rounds, compared to 75% when questions are asked by humans and 64% for a single shot text-to-image retrieval.
Strengths: The strength of this submission lies in its clear and impactful contribution, which proposes a pipeline that combines multiple modalities such as edge, pose, segmentation mask, and more. The authors effectively communicate the significance of this research contribution, highlighting the potential of integrating diverse modalities to enhance the proposed method's performance. Additionally, the submission features a visually appealing and clear illustrative depiction of the method, which effectively aids in explaining the approach. The selection of an attractive and informative illustration further enhances the clarity and appeal of the paper.
Weaknesses: One notable weakness of the submission is the lack of sufficient explanation regarding the experimental results. While the paper goes into detail in explaining the proposed method, it falls short in providing a thorough analysis and interpretation of the results. In Table 1, there are inconsistencies in the trends of metric scores between the proposed method and the two baselines, yet the authors do not adequately explain these discrepancies. It is crucial to understand why the proposed method may not perform as well in certain metrics, whether it is due to the limitations of the metrics themselves or issues with the method's performance. Merely providing cherry-picked demonstrations is insufficient and does not substitute for a comprehensive analysis of the results. It is essential to address these gaps in the paper by providing a detailed explanation and interpretation of the observed trends and discrepancies in the experimental results.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: It would greatly enhance the clarity of the paper if the authors could explicitly list the implemented and supported modalities in the abstract or introduction. Currently, the detailed setup is not revealed until Section 4, with the exception of Figure 1. Providing this information earlier in the paper would help readers gain a clear understanding of the modalities that are incorporated and supported in the proposed approach. Including this information in the abstract or introduction would improve the overall organization and accessibility of the paper, allowing readers to quickly grasp the key aspects of the submission.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have included discussions of the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful comments and suggestions regarding our work. In the revised version, we will restructure the manuscript to improve clarity. Moreover, I would like to address your concerns about the performance of our model across different experimental data and the supported modalities:
**Concerns about the performance**
The metrics used in the main manuscript focus on the quality of image generation and the degree of text-image alignment. As can be seen in Table 1, our model outperforms the baseline in most generated results (Aesthetic Score, HPS, etc.). However, concerning text-image alignment, such as the CLIP Score, one hypothesis is that the introduction of control signals from other modalities could weaken the control from the text part, potentially resulting in a lower CLIP Score than the baseline. Moreover, the FID score is strongly related to the training dataset; due to resource limits, we only use a subset of LAION, which may decrease diversity and lead to a higher FID.
We wish to emphasize that our work prioritizes the controllability of generated images while maintaining the quality of the generated images, and for this reason, we introduced a new set of measures in the appendix.
We assess our control ability by evaluating the similarity between the processed generated image and the given condition signal. To illustrate, we obtain the HED of the generated image and then calculate its L2 loss relative to the HED provided as a conditional input. This measure effectively demonstrates our method's capability to control other modalities and shows that it performs better than the baseline even in multi-modal scenarios. We believe this provides a comprehensive understanding of the advantages of our proposed method. Please feel free to let us know if further clarification is needed. Here we attach the table, which is also included in the suppl. file of the submission.
| | Similarity (LPIPS$\downarrow$) | Sketch Map (L2 Distance$\downarrow$) | Segmentation Map (mPA$\uparrow$) | Segmentation Map (mIoU$\uparrow$) | Pose Map (mAP$\uparrow$) |
|---|---|---|---|---|---|
| Multi-Adapter | 0.7273 $\pm$ 0.00120 | 7.93310 $\pm$ 0.01392 | 26.30 $\pm$ 0.242 | 13.98 $\pm$ 0.177 | 40.02 $\pm$ 0.761 |
| Multi-ControlNet | 0.6653 $\pm$ 0.00145 | 7.59721 $\pm$ 0.01516 | 36.59 $\pm$ 0.273 | 22.70 $\pm$ 0.229 | 38.19 $\pm$ 0.761 |
| Ours w/o ControlNorm | 0.4900 $\pm$ 0.00141 | **7.18413 $\pm$ 0.01453** | 48.26 $\pm$ 0.287 | 32.66 $\pm$ 0.272 | 61.93 $\pm$ 0.775 |
| Ours | **0.4836 $\pm$ 0.00133** | 7.28929 $\pm$ 0.01385 | **49.20 $\pm$ 0.289** | **33.27 $\pm$ 0.271** | **61.99 $\pm$ 0.778** |
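As an illustration of the control-fidelity measurement described above, a minimal sketch of the edge-map comparison follows. The function name and the toy 4×4 maps are our own; the authors' exact HED extractor and normalisation are not specified here, so only the distance computation is fixed:

```python
import numpy as np

def edge_l2_distance(edge_gen: np.ndarray, edge_cond: np.ndarray) -> float:
    """L2 distance between two edge maps with values in [0, 1].

    `edge_gen` would be the HED (edge map) extracted from the generated
    image; `edge_cond` is the edge map supplied as the conditional input.
    Lower is better: 0 means the generated image reproduces the condition.
    """
    assert edge_gen.shape == edge_cond.shape
    diff = edge_gen.astype(np.float64) - edge_cond.astype(np.float64)
    return float(np.sqrt((diff ** 2).sum()))

# A toy 4x4 edge map with a single horizontal edge.
cond = np.zeros((4, 4))
cond[1, :] = 1.0
print(edge_l2_distance(cond, cond))        # 0.0 (perfect fidelity)
print(edge_l2_distance(1.0 - cond, cond))  # 4.0 (every pixel disagrees)
```

In the actual evaluation, the edge maps would come from running an off-the-shelf HED detector on the generated images before taking the distance.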
**Supported Modalities**
We trained the network with the Segmentation map, Pose, and Sketch map. It can effectively handle any number of trained modalities but is unable to deal with unknown modalities; that is, the model can process the exact set of training modalities and any subset of it. We will provide a clearer statement of the supported modalities in the introduction. It is also worth noting that the supported modalities can be extended given a sufficient dataset.
---
Rebuttal Comment 1.1:
Comment: Thanks for providing the rebuttal response along with additional experimental results.
I acknowledge that I have read the response. | Summary: The paper presents Cocktail, a novel pipeline for multi-modal and spatially-refined control in text-conditional diffusion models. The authors address the challenge of ambiguous descriptions in linguistic representations by incorporating additional control signals. They propose three main components: gControlNet, ControlNorm, and a spatial guidance sampling method.
gControlNet is a hyper-network designed to align and infuse control signals from different modalities into the pre-trained diffusion model. It can accommodate flexible modality signals and allows for the simultaneous reception of any combination of modalities or the fusion of multiple modalities. This capability eliminates the need for manual intervention and equilibrates the disparities between modalities, making the system more flexible and capable of seamlessly supporting multiple control inputs.
ControlNorm is a controllable normalization method that optimizes the utilization of information within branched networks. It decouples control signals, allowing for better representation of both semantic and spatial aspects. By preserving semantic information while conveying spatial information, ControlNorm overcomes the limitations of previous methods that either ignored semantic information or led to the loss of it through normalization. The proposed method effectively interprets conditional information and demonstrates its interpretative capability through generated images.
Additionally, the paper introduces a spatial guidance sampling method that ensures precise control over the generative process. By modifying the attention map, the method incorporates control signals into the backbone network, preventing the generation of undesired objects outside the specified regions. This approach enables the generation of high-quality images that closely align with the input conditions.
Overall, the proposed Cocktail pipeline addresses the challenges of ambiguous descriptions in text-conditional diffusion models and achieves multi-modal and spatially-refined control. It offers a novel approach to integrating control signals from various modalities, optimizing their utilization, and generating high-quality images with fidelity to external signals.
Strengths: - The paper introduces a novel pipeline called Cocktail that enables multi-modal and spatially-refined control in text-conditional diffusion models, addressing the challenge of ambiguous descriptions in linguistic representations. This is a significant contribution to the field as it tackles an important problem in generative models and expands the capabilities of text-guided image synthesis.
- The proposed pipeline consists of three well-defined components: gControlNet, ControlNorm, and a spatial guidance sampling method. Each component is carefully designed to address specific challenges and effectively integrate control signals from different modalities. The approach is comprehensive and demonstrates a systematic solution to achieve high-quality synthesis and fidelity to multiple external signals.
- The paper provides extensive experimental results and evaluation metrics, comparing the proposed Cocktail pipeline with state-of-the-art methods. The results consistently show that Cocktail outperforms existing approaches in text-guided image-to-image translation across multiple modalities. This empirical evidence demonstrates the effectiveness and superiority of the proposed method.
- The paper emphasizes the practicality and efficiency of the Cocktail pipeline by demonstrating its ability to accomplish multi-modal control within a single model. This not only simplifies the model architecture but also reduces the computational overhead associated with multiple branch networks. The approach is both technically sound and resource-efficient, making it highly applicable in real-world scenarios.
Weaknesses: - The authors don't specify how they control for potential misuse of their model. A model like this, if open-sourced, can have wide ethical impacts, with people using it for unethical means such as creating deepfakes or modifying images in ways that could potentially be unethical. This should be addressed by the authors.
- The paper does not provide a thorough discussion or analysis of the computational complexity or efficiency of the Cocktail pipeline. Considering the potential computational overhead of incorporating multiple modalities and the fusion process, it would be valuable to address the computational requirements and scalability of the proposed method.
- The evaluation section could be expanded to include a more comprehensive analysis of the results. While the paper mentions various evaluation metrics, it would be valuable to discuss the limitations of these metrics and provide additional qualitative analysis or user studies to further validate the effectiveness of the proposed method.
- Although the paper compares the proposed Cocktail pipeline with state-of-the-art methods, it does not thoroughly discuss the limitations and failure cases of the proposed method. Understanding the shortcomings and potential failure modes is crucial for assessing the robustness and generalizability of the proposed approach.
- Implementation details seem sparse and might hinder reproducibility for the paper
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - How does the model handle conflicting signals provided by different modality signals, for example, if the provided pose and mask conflict with each other, how is the model supposed to handle this?
- How does the method prevent users from misusing the model to generate sensitive images or modify images to create unethical content?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: - The authors don't specify how they control for potential misuse of their model. A model like this, if open-sourced, can have wide ethical impacts, with people using it for unethical means such as creating deepfakes or modifying images in ways that could potentially be unethical. This should be addressed by the authors.
Flag For Ethics Review: ['Ethics review needed: Inadequate Data and Algorithm Evaluation', 'Ethics review needed: Inappropriate Potential Applications & Impact (e.g., human rights concerns)']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your insightful comments and questions. Your feedback helps us to address important aspects of our work that deserve more detailed consideration. Here's our response to your observations.
#### **Ethical Considerations:**
We acknowledge your concern about the potential misuse of our model. The images that our model can generate heavily depend on the training dataset used. Currently, our model's training data strictly complies with open-source licenses and has been carefully selected, providing some degree of safeguard against misuse. In situations where the model generates certain objects (including human beings), many details can be easily distinguished. Going forward, we are considering the addition of watermarks to the generated images to prevent unauthorized use, along with the implementation of a monitoring network at the output stage. Similar tools are already widely utilized in projects like Stability / DALL·E 2, and we believe our model can also benefit from adopting such measures.
#### **Computational Complexity:**
One of our key objectives is to alleviate the significant overhead associated with multi-modal control. We are pleased to provide specific details of our model's parameters to clarify the contributions of this work.
Firstly, we note that the parameter count for our employed SD base model stands at 865.91M, with a single ControlNet having 378.50M parameters. Our fusion network $\mathcal{M}$ contains only 1.29M parameters per modality, and each ControlNorm adds a mere 0.31M parameters.
In the context of single modality control using ControlNet, the total parameter count is 865.91M + 378.50M. As the number of modalities increases, this overhead grows linearly, reaching a total of 865.91M + $n$ x 378.50M, where $n$ is the number of modalities.
Our proposed gControlNet efficiently handles multiple modalities through a single HyperNetwork. Excluding ControlNorm, our model has a parameter count of 865.91M + 378.50M + $n$ x 1.29M. This formulation significantly reduces the computational requirements in multi-modal scenarios, underscoring the efficiency and innovation of our approach.
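The parameter accounting above reduces to a short calculation. This sketch reuses the figures quoted in the rebuttal (in millions of parameters); the function names are our own, introduced only for illustration:

```python
SD_BASE = 865.91     # SD base model (millions of parameters)
CONTROLNET = 378.50  # one ControlNet branch (millions of parameters)
FUSION_M = 1.29      # fusion network M, per modality (millions of parameters)

def params_per_modality_controlnet(n: int) -> float:
    # One ControlNet branch per modality: overhead grows linearly in n.
    return SD_BASE + n * CONTROLNET

def params_gcontrolnet(n: int) -> float:
    # One shared branch plus a small fusion network per modality.
    return SD_BASE + CONTROLNET + n * FUSION_M

for n in (1, 3):
    print(n, round(params_per_modality_controlnet(n), 2),
          round(params_gcontrolnet(n), 2))
# n=1: 1244.41 vs 1245.7; n=3: 2001.41 vs 1248.28
```

The gap widens with every additional modality, which is the efficiency argument made in the rebuttal.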
#### **Comprehensive Analysis of Results**
We are more than willing to engage in further discussions regarding the metrics employed in our study.
In the main part of this submission, we employed various metrics such as CLIP, FID, and aesthetic scoring to assess the performance of our model. These metrics tend to emphasize the model's ability to generate image. For instance, FID evaluates the fit of the generated image to the distribution of real images, the CLIP Score represents the correlation between the generated image and corresponding text, and aesthetic scores reflect the artistic quality of the generated images.
However, our proposed model aims to enhance controllability while maintaining image generation quality. Therefore, to provide a more comprehensive evaluation of the model's ability to control image attributes, we have introduced a set of additional metrics in the appendix.
These supplementary metrics process the generated images into the given control modality signals, subsequently quantifying the discrepancies between the generated images and the provided signals. These measures are instrumental in substantiating the effectiveness of our proposed solution in enhancing image controllability, aligning our evaluation more closely with the specific goals of our research.
#### **Concerns about reproducibility**
The code will be released and made publicly available for reproducibility, and the corresponding repository link will be included in the camera-ready version of the manuscript.
#### **Conflicting control signals**
When confronted with conflicting signals across different modalities, the model endeavors to interpret each modality coherently. It strives to incorporate all the information conveyed by the different modalities, translating them into conditions for the generated images. The result is a theoretically reasonable representation that balances and harmonizes the conflicting inputs, ensuring that the synthesized images reflect as much of the diverse input data as possible. However, there is a possibility that one modality may overshadow another if its representation is significantly more dominant or contains richer information. In such cases, the more expressive modality may suppress the other, leading to an unbalanced influence on the generated images. This phenomenon might result in the loss of some nuanced details conveyed by the less dominant modality.
We have included some example samples of such situations in the Global Rebuttal attachment.
Finally, we would like to express our sincere gratitude for your thoughtful and comprehensive review.
---
Rebuttal Comment 1.1:
Comment: Thanks for providing the rebuttal and clarifying the concerns! | Rebuttal 1:
Rebuttal: We have organized the comments from the reviewers, and below are several issues that have been highlighted by multiple reviewers.
**Clarification of Benefits and Analysis:**
In the principal section of this submission, metrics such as CLIP, FID, and aesthetic scoring were employed to gauge our model's performance. These metrics often stress the model's capacity to generate images. For example, FID measures how closely the generated images fit the distribution of real images, while the CLIP Score represents the correlation between the generated images and corresponding text, and aesthetic scores gauge the artistic quality. The FID score is highly linked to the training dataset, and due to resource constraints, using only a subset of LAION might decrease diversity, leading to a higher FID.
We emphasize that our work focuses on image controllability while maintaining quality. Thus, additional metrics have been introduced in the appendix for a more comprehensive evaluation of control over image attributes. These supplementary metrics process the generated images into the given control modality signals, then quantify the discrepancies between the generated images and the provided signals. For example, we obtain the HED of the generated image and calculate its L2 loss relative to the conditional HED input. We repeat our evaluation results as Tables 1 and 2 in the attachment. It can be seen that our approach is superior to the baseline in controllability across all trained modalities.
**Computational Complexity and Additional Details:**
Alleviating the substantial overhead of multi-modal control is a primary goal. We're happy to clarify specific details about our model's parameters to illuminate this work's contributions.
Our employed SD base model's parameter count is 865.91M, with a single ControlNet having 378.50M parameters. Our fusion network $\mathcal{M}$ consists of only 1.29M parameters per modality, and each ControlNorm adds just 0.31M parameters.
In single modality control using ControlNet, the total parameters are 865.91M + 378.50M. With an increase in modalities, this overhead grows linearly, reaching 865.91M + $n$ × 378.50M where $n$ is the number of modalities.
Our innovative gControlNet handles multiple modalities efficiently via a single HyperNetwork. Excluding ControlNorm, the parameter count is 865.91M + 378.50M + $n$ × 1.29M, substantially cutting the computational needs in multi-modal contexts and highlighting our approach's efficiency.
$\mathcal{M}$ is a straightforward convolutional network with eight layers and seven SiLU activation functions, with the last layer initialized to zero. Each $\mathcal{M}$ has 1.29M parameters, and each modality has a corresponding $\mathcal{M}$; for instance, the 3 different modalities in our case result in 3.88M total parameters. The model can effectively manage any number of trained modalities but cannot handle unknown ones, meaning it can process the exact set of training modalities and any subset of it.
Compared with ControlNet, training gControlNet without ControlNorm additionally involves $\mathcal{M}$; this operation might degrade the quality of the generated images without the assistance of ControlNorm. Moreover, after passing through $\mathcal{M}$ and gControlNet, only a single set of control signals is injected into the base model, whereas ControlNet or T2I injects multiple sets of control signals. In addition, unlike T2I and ControlNet, where different modalities are trained separately, gControlNet co-trains $\mathcal{M}$ with different modalities.
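One detail above worth unpacking is the zero-initialised last layer of $\mathcal{M}$: at the start of training, the control branch contributes exactly nothing, so optimisation begins from the unmodified pre-trained model. A minimal numpy sketch of this property (the 1×1-convolution shapes and variable names are illustrative, not the actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def one_by_one_conv(features, weight):
    # A 1x1 convolution over an HWC feature map is a per-pixel matmul.
    return features @ weight

base = rng.normal(size=(8, 8, 4))     # base-model feature map
branch = rng.normal(size=(8, 8, 4))   # control-branch feature map

w_zero = np.zeros((4, 4))             # zero-initialised last layer
out = base + one_by_one_conv(branch, w_zero)

# With zero weights, the injected control signal vanishes and the base
# model's activations pass through unchanged; gradients into w_zero then
# let the control signal grow in gradually during training.
assert np.allclose(out, base)
```

This is the same initialisation trick used by ControlNet's zero convolutions, which is presumably why it appears here as well.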
Codes will be made available for reproduction.
**Ethical Considerations:**
We acknowledge your concern about the potential misuse of our model. The images that our model can generate heavily depend on the training dataset used. Currently, our model's training data strictly complies with open-source licenses and has been carefully selected, providing some degree of safeguard against misuse. In situations where the model generates certain objects (including human being), many details can be easily distinguished. Going forward, we are considering the addition of watermarks to the generated images to prevent unauthorized use, along with the implementation of a monitoring network at the output stage. Similar tools are already widely utilized in projects like Stability / DALL.E2, and we believe our model can also benefit from adopting such measures.
**Conflicting control signals**
When confronted with conflicting signals across different modalities, the model endeavors to interpret each modality coherently. It strives to incorporate all the information conveyed by the different modalities, translating them into conditions for the generated images. The result is a theoretically reasonable representation that balances and harmonizes the conflicting inputs, ensuring that the synthesized images reflect as much of the diverse input data as possible. However, there is a possibility that one modality may overshadow another if its representation is significantly more dominant or contains richer information. In such cases, the more expressive modality may suppress the other, leading to an unbalanced influence on the generated images. This phenomenon might result in the loss of some nuanced details conveyed by the less dominant modality. Some samples are demonstrated as Fig.1 in the attachment.
Pdf: /pdf/6036e87b4563935d392b3d25c00ddad635e419f2.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper proposes Cocktail, a pipeline to mix various modalities into one embedding. The model is based on a variant of ControlNet and infuses control signals from disparate modalities into the pre-trained diffusion model. It is also equipped with a sampling approach named spatial guidance sampling that constructs the cross-attention weights based on the spatial location. The model can perform text-to-image generation conditioned on multiple modalities and outperforms ControlNet in some metrics.
Strengths: Both the proposed controllable normalization and spatial sampling method are novel and have not been previously explored in the context of controllable generation.
Weaknesses: The model does not consistently outperform the two baselines, ControlNet and T2I-Adapter, in text-guided image-to-image translation. The authors claimed the model can benefit from mixed training of multiple modalities (lines 213-215), but the proposed model shows mixed results across various metrics on different tasks. Additionally, there is no quantitative comparison in multi-modality conditioning to existing baselines. For spatial guidance, no metrics showing its effect were given either. Overall, it is unclear what the advantages of the proposed method are over simple baselines such as ControlNet.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: 1. Line 216 - 218, what figures or tables refer to “consistent composition”?
2. Eq 7, what is $\sigma$? How to determine its value?
3. How to determine the correspondence between text and image token?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and observations regarding our model.
#### **Clarify the specific benefits of our proposed method.**
The metrics introduced in the main text focus on the quality of image generation. After training, our model outperformed the baselines in the quality of the generated images in most results, especially in tasks involving the Segmentation Map or Pose. However, it is essential to note that the quality of the generated images is strongly related to the training dataset, particularly when it comes to the FID score. Moreover, the main objective of our task is to enhance the controllability of the generated images rather than merely improving quality.
In the appendix, we introduced a more effective metric to measure the control strength of our proposed scheme. For convenience, we have attached the table:
| | Similarity (LPIPS$\downarrow$) | Sketch Map (L2 Distance$\downarrow$) | Segmentation Map (mPA$\uparrow$) | Segmentation Map (mIoU$\uparrow$) | Pose Map (mAP$\uparrow$) |
|---|---|---|---|---|---|
| Multi-Adapter | 0.7273 $\pm$ 0.00120 | 7.93310 $\pm$ 0.01392 | 26.30 $\pm$ 0.242 | 13.98 $\pm$ 0.177 | 40.02 $\pm$ 0.761 |
| Multi-ControlNet | 0.6653 $\pm$ 0.00145 | 7.59721 $\pm$ 0.01516 | 36.59 $\pm$ 0.273 | 22.70 $\pm$ 0.229 | 38.19 $\pm$ 0.761 |
| Ours w/o ControlNorm | 0.4900 $\pm$ 0.00141 | **7.18413 $\pm$ 0.01453** | 48.26 $\pm$ 0.287 | 32.66 $\pm$ 0.272 | 61.93 $\pm$ 0.775 |
| Ours | **0.4836 $\pm$ 0.00133** | 7.28929 $\pm$ 0.01385 | **49.20 $\pm$ 0.289** | **33.27 $\pm$ 0.271** | **61.99 $\pm$ 0.778** |
These metrics employed open-source methods to process the generated images and then compared them with the given conditional information. For instance, we obtain the HED of the generated image and then calculate its L2 loss relative to the HED provided as a conditional input. This approach allows us to focus more on the controllability aspect, which is the primary goal of our study. **It is worth noting that our Cocktail achieves much better quantitative results according to these controllable inputs.**
#### **Line 216 - 218, what figures or tables refer to “consistent composition”?**
This sentence summarizes the experimental section, and in addition to Figures 5 and 6 in the main text, Figures 1 to 7 in the appendix effectively demonstrate that our approach can efficiently blend the control signals with the generated content. Specifically, **Figure 6** in the main text, along with **Figures 1 and 2 in the appendix**, more clearly showcase that *signals from different control modalities can be applied to multiple objects without conflicts, maintaining consistency.*
#### **Eq 7, what is $\sigma$? How to determine its value?**
$\sigma$ is a fixed parameter determined by the scheduler, representing the degree to which noise affects the model at different steps. For a detailed understanding, one can refer to **Equation 16 in Reference [1]**, where $\sigma$ is used as a conventional expression.
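For concreteness, the noise scale from Eq. (16) of the cited DDIM paper can be written out as a small function. This sketch assumes the usual cumulative-product ($\bar{\alpha}$) parameterisation, with $\eta = 0$ recovering the deterministic DDIM sampler and $\eta = 1$ the DDPM-like stochastic one:

```python
import numpy as np

def ddim_sigma(alpha_bar_t, alpha_bar_prev, eta=0.0):
    """Sigma for one DDIM step, following Eq. (16) of Song et al. (2020)."""
    return (eta
            * np.sqrt((1.0 - alpha_bar_prev) / (1.0 - alpha_bar_t))
            * np.sqrt(1.0 - alpha_bar_t / alpha_bar_prev))

print(ddim_sigma(0.5, 0.9, eta=0.0))         # 0.0 -> deterministic sampling
print(ddim_sigma(0.5, 0.9, eta=1.0) > 0.0)   # True -> stochastic sampling
```

In a real sampler, sigma would be computed per timestep from the scheduler's alpha-bar sequence, which is why it is "fixed" once the scheduler is chosen.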
#### **How to determine the correspondence between text and image token?**
The question raised appears to be somewhat unclear. The relationship between text and image in the network can be established through a cross-attention map. By modifying the cross-attention layer, there are various methods currently available to alter objects within the generated image. In the validation phase, calculating the CLIP score is a common approach to measuring the similarity between text and image.
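To make the cross-attention mechanism described above concrete, the following toy sketch applies a binary image-token-to-text-token mask inside cross-attention. The mask layout, shapes, and function name are hypothetical illustrations, not the paper's exact construction:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def masked_cross_attention(Q, K, V, mask):
    """mask[i, j] = 1 where image token i may attend to text token j."""
    logits = Q @ K.T / np.sqrt(Q.shape[-1])
    logits = np.where(mask.astype(bool), logits, -1e9)  # suppress masked pairs
    return softmax(logits) @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 8))   # 5 image tokens
K = rng.normal(size=(3, 8))   # 3 text tokens
V = rng.normal(size=(3, 8))
mask = np.zeros((5, 3))
mask[:2, 0] = 1               # image tokens 0-1 respond only to text token 0
mask[2:, 1:] = 1              # remaining tokens respond to text tokens 1-2

out = masked_cross_attention(Q, K, V, mask)
# With a single allowed text token, those image tokens receive exactly
# that token's value vector.
assert np.allclose(out[0], V[0]) and np.allclose(out[1], V[0])
```

This illustrates the general idea of confining a prompt token's influence to a spatial region by editing the attention map; the paper's spatial guidance sampling builds its masks from the conditioning signals rather than by hand.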
[1] Song, J., Meng, C., & Ermon, S. (2020). Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response.
1. Regarding the benefits of the proposed method, I can see the improved controllability from the attached table. However, the increase of FID (Table 1, b + c, ours vs. controlnet) suggests a possibly lower quality of the generated image. In the meantime, a decrease of the CLIP score (Table 1, a+b+c, ours vs. ControlNet or ours vs. T2I-Adapter) suggests a lower controllability through text. This seems contradictory to *signals from different control modalities can be applied to multiple objects without conflicts, maintaining consistency.*, which is claimed in the rebuttal and in line 216-218 of the main paper. The author should better show whether the conflict between the improvement of controllability through pose/edge/segmentation mask and the FID degradation is due to the the flaw of FID computation or is at the price of degradation in image quality, via randomly selected qualitative samples or quantitative metrics based on human evaluation.
2. Regarding determining the correspondence between text and image token, my question is about line 184-186: **if image token $Q_i$ corresponds to a region of the image that should be influenced by text token $K_j$ , $M_{ij}^{pos(n)}$ is assigned the value of 1**. How is **should be influenced** defined?
3. It is clear to me now that $\sigma$ is the noise level per step. It should be indexed by the diffusion timestep and the authors should also clarify that the paper is using DDIM.
---
Reply to Comment 1.1.1:
Comment: Thanks for your response.
1. In response to the comments, we'd like to clarify that the foundational model for our SD is SD2.1, not SD1.5. This model employs OpenCLIP instead of CLIP. Here you can find the FID and CLIP Score comparison between SD1.5 and 2.1 [1,2]. The referenced CLIP Score specifically pertains to CLIP and does not account for OpenCLIP. Additionally, we highlight that to provide a comprehensive assessment and to mitigate any potential misconceptions regarding image quality degradation, we've incorporated both HPS and Image Reward Score. These metrics simultaneously evaluate image quality and the connection between text and image. We acknowledge that the FID metric might not be the most robust for assessing image quality in contemporary generative models (although it is still a good metric for a generative model). We agree that introducing human feedback could indeed offer insights into the final quality. Therefore, we assert that the observed enhancement in controllability via pose/edge/segmentation mask is **NOT** attributed to any shortcomings in FID computation, nor does it come at the expense of quality degradation.
2. Here we use **influence** to indicate which areas of the image we want to be responded to the prompt accordingly.
3. Thank you for the clarification. We will modify this part of the manuscript accordingly.
[1] https://github.com/Stability-AI/stablediffusion, Fig: https://github.com/Stability-AI/stablediffusion/raw/main/assets/model-variants.jpg
[2] https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0, Fig: https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/comparison.png | null | null | null | null | null | null |
A Measure-Theoretic Axiomatisation of Causality | Accept (oral) | Summary: This paper builds a rigorous foundation for causal reasoning. It does so by defining causal space, an extension of the concept of probability space by adding stochastic kernels. This richer structure allows the authors to define notions like interventions and causal effects, and generalize the standard frameworks like structural causal models.
Strengths: It is easy to take for granted the advantages provided by the measure-theoretic foundations of probability theory, but it is undeniable that Kolmogorov's insight to do so put probability theory in a respectable position and more importantly allowed for rapid progress on otherwise impossibly difficult problems related to stochastic process by Doob and others. No doubt this axiomatization did not happen without resistance, and many early probabilists considered it an offense to probabilistic intuition.
This paper is a valiant attempt at axiomatization of causal reasoning, and in my opinion it has a done a fairly good job. The idea of using stochastic kernels to enrich the underlying probabilistic structure for the much more difficult (than plain statistics) problem of causal reasoning is very natural while at the same time difficult to effectuate. The authors do a great job defining all the relevant terms and being rigorous in their treatment.
Weaknesses: The authors do a lackluster job in providing enough motivation and examples, and discussing potential alternative axiomatizations. A first read of the paper gives an impression of pulling the definitions "out of the hat". This is unfortunate since it is important to encourage as many readers as possible to think in this direction and incorporate it in their research.
On lines 137-138, it is stated that "$K_S(\omega, A) = K_S((\omega_S, \omega_{T \setminus S}), A)$ for any $A \in \mathcal{H}$ only depends on the first $\omega_S$ component of $\omega$". I don't think this is true (if it is indeed true, please provide a proof). For example, let $T = \{1,2\}$, $S = \{1\}$, $E_1 = E_2 = \{0,1\}$ and $\mathcal{E}_1 = \mathcal{E}_2 = \mathcal{P}(\{0,1\})$. Then $\Omega = E_1 \times E_2$ and $\mathcal{H} = \mathcal{E}_1 \otimes \mathcal{E}_2 = \mathcal{P}(\Omega)$. Now we could define $K_S$ in such a manner that it depends on the second component of $\omega \in \Omega$. Say:
$$
K_S(\omega, A) = \begin{cases}
0 & \text{if } \omega = (0,0),\ A = \{(0,0)\} \\
1 & \text{if } \omega = (0,1),\ A = \{(0,0)\} \\
\vdots
\end{cases}
$$
A few minor nitpicks:
1. In section 2, where $\mathcal{H}_S$ is defined, it would be clearer to recall what measurable rectangles are, since it is implicitly used that, for measurable rectangles, $A_t$ differs from $\mathcal{E}_t$ for only finitely many $t$.
2. In definition 2.2, it is misleading to say "(product) probability space". The word "product" when talking about measurable spaces (as opposed to _measure_ spaces) has a connotation of product measures, which is not the case here.
3. I find the notation in equation (2) confusing. It might be helpful to also say that the kernel $K_S^{\text{do}(U, \mathbb Q, \mathbb L)}$ is simply the product $L_{S \cap U} K_{S \cup U}$ of stochastic kernels (as defined by Çinlar on page 39).
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: While understanding the definition of causal space, a natural question of what are some sufficient conditions for its existence arises (similar in spirit to the sufficient conditions for the existence of regular conditional distributions). It would be nice to explore this question.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Not relevant.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer very much for the time and effort spent in reviewing our submission, and for the positive evaluation. Your comments make it clear that you understood and agreed with our motivation behind this project, for which we are humbled and extremely grateful. We are also very grateful for the suggestions for improvements, which we will take into account for future versions of this paper. We will do our best to address your concerns below.
**Motivation**
We fully agree that, as currently written, the axioms somewhat seem to be pulled out of the hat. In fact, these axioms were a result of over a year of contemplating different ways of axiomatising causality. Given more space, you are absolutely right that more discussion of the motivation and potential alternative axiomatisations would certainly benefit the paper, and we propose to do so in the future versions of this work.
**Lines 137-138**
The causal kernel $K_S$ is a transition probability kernel from $(\Omega,\mathscr{H}_S)$ into $(\Omega,\mathscr{H})$, so for a given $A\in\mathscr{H}$, the map $\omega\mapsto K_S(\omega,A)$ has to be measurable with respect to $\mathscr{H}_S$ (see definition of transition kernels in the appendix, lines 568-575). If $K_S$ depended on the $T\setminus S$ components, it would violate this measurability condition. We hope we understood your point correctly, and that this addresses your concern.
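The measurability argument can be checked concretely on the reviewer's finite example. The following sketch is our own illustration (the helper `is_HS_measurable` is hypothetical, not from the paper): with $T=\{1,2\}$ and $S=\{1\}$, a map $\omega\mapsto K_S(\omega,A)$ is $\mathscr{H}_S$-measurable exactly when it is constant in the second coordinate, which the reviewer's proposed kernel violates.

```python
# Finite-case sketch (our illustration, not from the paper): H_S is the
# sub-sigma-algebra generated by the first coordinate, so H_S-measurable
# functions on Omega are exactly those constant in the second coordinate.
from itertools import product

E1, E2 = [0, 1], [0, 1]
Omega = list(product(E1, E2))

def is_HS_measurable(f):
    """True iff f: Omega -> R does not depend on the second coordinate."""
    return all(f((w1, 0)) == f((w1, 1)) for w1 in E1)

# The reviewer's proposed kernel, evaluated at A = {(0,0)}: it assigns
# K((0,0), A) = 0 but K((0,1), A) = 1, so it depends on omega_2 ...
K_at_A = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 0.5, (1, 1): 0.5}

# ... and therefore fails the measurability requirement on causal kernels.
print(is_HS_measurable(K_at_A.get))      # False
print(is_HS_measurable(lambda w: w[0]))  # True: depends only on omega_1
```

In other words, the example in the review does define a function on $\Omega$, but not a transition kernel from $(\Omega,\mathscr{H}_S)$ into $(\Omega,\mathscr{H})$.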
**Minor comments**
1. We give the definition of measurable rectangles in the appendix on line 523. We propose to move this definition to the main body in the camera-ready version.
2. You are right. We propose to change this, so that we make it clear that only the $\sigma$-algebra is in the form of a product, not the measure. Thank you very much for pointing this out - this is a great spot.
3. Actually (2) is not a product of kernels, because $K_{S\cup U}$ has the $\omega_{S\setminus U}$ component that does not get integrated with respect to the measure $L_{S\cap U}(\omega_{S\cap U},\cdot)$. In the appendix, Remark C.1(b) treats the special case in which the interventional kernel $K^{\text{do}(U,\mathbb{Q},\mathbb{L})}_S$ is a product of kernels.
**Questions**
I'm guessing you mean something like [Cinlar, 2011, page 151, Theorem IV.2.7]? If so, yes, we agree that this is definitely an interesting question to answer in future research. Thank you very much for the suggestion, we really appreciate it!
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications. I stand by my high rating of this paper. | Summary: This paper presents a completely novel mathematical description of causal systems, which generalizes the most well-known causal formalisms such as SCMs and potential outcomes. It does so by taking probability distributions as the foundation, and uses these to construct a measure-theoretic approach to causal systems. The result is a very expressive framework that definitely has its place within the landscape of causal formalisms.
Strengths: The paper is clearly written, but some background in probability theory is required to understand the technical definitions. There is substantial discussion of related work and open questions, illustrations of the framework, and comparisons to SCMs. Overall this is a great paper.
Weaknesses: See questions.
Typos:
58: “we have designated” -> “we have a designated”
81: “implications concepts” -> “implications for concepts”
107: “the measurable space are” -> “the measurable space is”
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: I do have some conceptual comments about the motivation and supposed superiority of the framework.
1: In your comparison with SCMs, you only consider hard interventions. What about soft interventions?
2: Remark 3.2:
I find this quite strange. Usually one thinks of an intervention not as an intervention on the system that affects a subsystem, but simply as an intervention on the subsystem itself, and this of course affects the system, just as changes to a part are also changes to a whole. I’m not saying the idea of a “holistic” intervention on an entire system is uninteresting, but merely that this is not usually how causal interventions are understood. In fact, the independent mechanism assumption made by Woodward and taken over by others captures a fundamental modularity that is assumed to hold for causal relations.
Similarly, the idea of coupling the observational and the interventional distributions is a feature, not a bug: it encodes an assumption about how we take the world to work.
In sum, although I agree that it is interesting to decouple these distributions and to view interventions as “system-wide”, as your formalism does, I am not sure when and why we would ever want to do so: at heart, the motivation for your framework seems to come with a philosophical theory that is lurking in the background which appears in conflict with the standard interventionist theory of Woodward that causal modellers usually align with. I realize that it would be too much for one paper to introduce both such a rich framework and dive into a philosophical analysis, but my suggestion would then be to here remain more neutral regarding these commitments.
3: footnote 1: do you think your framework is strictly more expressive than GSEMS? They don’t use probabilities, but perhaps you could reconstruct GSEMS by giving positive support to all values, and reasoning about P > 0?
4: Confounders.
I’m not sure if the authors are aware, but the philosopher Nancy Cartwright has famously argued against the Causal Markov Condition by using a very similar example, the so-called chemical factory. In her case though, she goes even further, stipulating that there is no common cause, not even an unobserved one. If one accepts her stipulation, then CBNs seem indeed unable to express such situations, whereas your framework would have no trouble with it at all. So it might be useful to refer to this discussion. For what it’s worth, the usual reply is similar to the move that you here discuss: simply add a “fictional” exogenous variable that functions as a latent common cause.
This brings me to footnote 2: I do not agree that such a variable is “completely meaningless”. It simply captures our ignorance. As long as one is committed to the Causal Markov Condition, one is committed to the belief that such a latent variable must exist. Whether we call it one variable or several variables collapsed into one seems merely a matter of terminology.
5: “Of course, cycling relationships abound in the real world.”
Really? That is quite a controversial claim: do the authors then subscribe to backwards causation? I’d say that cyclic models are useful abstractions of what in the end are non-cyclic systems. Your example is a case in point: it’s not the current price which causes the current amount of rice, and vice versa, it’s the price right now that determines the future amount.
Moreover, although I’m sure the authors are right that SCMs have trouble in representing certain cyclic cases, is this really one of them? It doesn’t seem so hard to model this with an SCM.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer very much for the time and effort spent in reviewing our submission, and for the positive evaluation. Thank you also for suggestions for improvements; we will reflect them in future versions of this work. We will do our best below to address your concerns in the Questions section.
1. If we understand correctly, you are referring to Section 3.1, lines 214-215. This does not refer exclusively to hard interventions; it simply constructs the causal kernels. Once these causal kernels are constructed, by placing a non-Dirac measure on the sub-$\sigma$-algebra on which we want to intervene, and also by utilising the internal causal mechanisms $L$, we can then formulate soft interventions. For future versions of this work, we propose to spell this out more explicitly.
2. Actually the point we were trying to make was precisely in agreement with your view - that we are most often interested in intervening on subsystems, rather than the whole system. Our causal kernels capture precisely this information, i.e. the effect of intervening on subsystems on the whole system. The point we were trying to make was that in the SCM formulation, the mathematics seems to go in the reverse direction to this philosophy, because each structural equation $f_j$ encodes information about the effect of intervening on the whole system on the subsystem $X_j$. Our causal space formalism does *not* view interventions system-wide - each causal kernel encodes information about intervening on the corresponding subsystem (sub-$\sigma$-algebra). In this sense, we are in complete agreement in terms of the philosophy (with Woodward, and that behind e.g. the SCM framework), we just differ in how to encode this mathematically.
As for the decoupling of observational and interventional measures, we maintain that it is an advantage to be able to do this for the sake of generality. In order to be able to couple them, as is done in SCMs, we have to assume that all of the variables required to do so have been included in the model (see Example 4.1). We do treat the case in which the observational measure and the interventional measure (at least partially) match as an important special case - see Appendix D on the concept of sources that we introduce there.
3. Some philosophy behind causal spaces is shared with GSEMs in the sense that GSEMs also encode information about what happens to the whole system when intervening on a subsystem. However, there are some technical hurdles to be overcome if one wants to actually reconstruct a GSEM from a causal space. First of all, they consider power sets of the range rather than $\sigma$-algebras, which, if the variables are allowed to take uncountably many values, causes measure-theoretic problems. Moreover, as you rightly point out, for a given probability measure on the exogenous variables and an intervention, there seems to be no way of propagating this measure on the exogenous variables to sets of values of the endogenous variables.
In the sense that GSEMs cannot do probabilities and causal spaces are designed explicitly with probabilities in mind, we believe that causal spaces strictly generalise GSEMs. We also do not believe that they worked with power sets rather than $\sigma$-algebras for any deep reason, and if we wrote out the GSEM formulation with $\sigma$-algebras, then indeed, as you suggest, we believe it would be possible to reconstruct GSEMs from causal spaces by letting the function $\mathbf{F}$ map to the support of each intervention. However, for reasons laid out above, there would be no way of uniquely reconstructing causal spaces from GSEMs.
4. This would be a very valuable addition to our work, and we are very grateful to you for this comment. We absolutely propose to add a discussion of this example in future versions of this work.
As for footnote 2, we agree with you that the variable itself is not meaningless. The variable of course can have meaning as a way of capturing our ignorance, precisely as you said. Our point was that such a variable cannot have numerical values and distributions that represent something meaningful. What does it mean for a variable that we name "everything in the world that we do not know about" to have value 1? What does it mean for it to be normally distributed? Properly defined random variables should be more concrete, e.g. height of a student in cm, temperature in Celsius, crop yield in tonnes, etc.
5. Thank you for raising this point. Please see author rebuttal to all reviewers.
We thank you again for your positive review and suggestions, and should you have any more questions or if we misunderstood any of your points, we would be delighted to engage in the author-reviewer discussions.
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal. Thanks for the clarifications! | Summary: The paper develops new foundations for causality based on the measure-theoretic foundations of probability pioneered by Kolmogorov. Interventions are formalized as transition kernels and it's shown that the new approach is more general than existing ones in a number of respects.
Strengths: The paper is clearly laid out, sound, and represents a noteworthy (and as far as I know, novel) attempt to do causality in terms of mathematical probability theory. I believe the most promising direction in terms of utility is in being able to handle continuous-time causal stochastic processes, which have so far resisted formalization; the approach here does not even rely on dynamical systems.
Weaknesses: I raise some edits and additional citations below.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - Line 49: Why is there no citation to Ibeling & Icard's 2020 "Probabilistic Reasoning across the Causal Hierarchy"? This paper was (one of the, if not the) first to axiomatize SCMs in a truly *probabilistic* setting. The references [18, 20] there are still only quasi-probabilistic. (Admittedly, the paper I suggest does enforce an acyclicity constraint, but it is no better or worse than the other two references in this regard.)
- Line 296: Citation 20 probably should also be thrown in here.
- Can the rice example be modeled within the space of SCMs that always have unique solutions? I can't see why not. In this particular case, although the framework here may be more natural, it may not add much that is substantive.
- Example 4.2: notations K_1(3, x) and K_2(6, x) seem inconsistent with each other since the interventions are on different variables.
- Can the authors comment on whether the framework given invalidates quintessential principles of SCMs, e.g., effectiveness, composition, and reversibility? I am guessing the answers are no (by interventional determinism, line 158), yes, yes. The broader point is that not only do SCMs require additional assumptions to be well-defined, but that these assumptions may also reverberate in causal reasoning.
- Line 328: Z = {..., 2, 1, 0, 1, 2, ...} should read Z = {..., -2, -1, 0, 1, 2, ...}
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, they have.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort spent in reviewing our submission, and for the overall positive evaluation. We also thank you for raising interesting points and suggestions for improvements, which we will take into account in future versions of this work. Below, we will try our best to address your concerns and queries.
**Line 49**
This is a very reasonable citation recommendation. We had of course read the paper during the project, but inexplicably left out its citation in our submission. We will make sure to include it in future versions of this work. Thank you very much for pointing it out.
**Line 296**
Agreed - thank you very much!
**Rice Example**
Thank you for making this point - please see author rebuttal to all reviewers.
**Example 4.2**
Thank you for pointing this out. We will change one of the $x$'s into a $y$ for better notation.
**Principles of SCMs**
By effectiveness, composition and reversibility, we assume that the reviewer is referring to the notions introduced under these names in [Galles and Pearl, 1998, An Axiomatic Characterisation of Causal Counterfactuals, Section 3]. This is a very interesting point to consider, and we are very grateful to you for raising it.
Firstly, even though the concepts of effectiveness, composition and reversibility can be carried over to causal spaces, the mathematics through which they are represented needs to be adapted, since the tools that are used in causal spaces are different from those used in causal models of [Galles and Pearl, 1998]. In particular, we work directly with *measures* as the primitive objects, whereas [Galles and Pearl, 1998] use the structural equations as the primitive objects, and the probabilities only enter through a measure on the exogenous variables. Thus, the three properties can be phrased in the causal space language as follows:
Effectiveness: For $S\subseteq R\subseteq T$, if we intervene on $\mathscr{H}_R$ via a measure $\mathbb{Q}$, then $\mathscr{H}_S$ has measure $\mathbb{Q}$ restricted to $\mathscr{H}_S$. This is indeed guaranteed by interventional determinism (Definition 2.2(ii)), as you said.
Composition: For $S,R\subseteq T$, denote by $\mathbb{Q}'$ the measure on $\mathscr{H}_{S\cup R}$ obtained by restricting $\mathbb{P}^{\text{do}(S,\mathbb{Q})}$. Then $\mathbb{P}^{\text{do}(S,\mathbb{Q})}=\mathbb{P}^{\text{do}(S\cup R,\mathbb{Q}')}$. In words, intervening on $\mathscr{H}_S$ via the measure $\mathbb{Q}$ is the same as intervening on $S\cup R$ via the measure that it would have if we intervened on $\mathscr{H}_S$ via $\mathbb{Q}$. This is not in general true, as you said. A counterexample can be demonstrated with a simple SCM, where $X_1$, $X_2$ and $X_3$ causally affect $Y$ in a way that depends not only on the marginal distributions of $X_1$, $X_2$ and $X_3$ but also on their joint distribution, and $X_1$, $X_2$ and $X_3$ have no causal relationships among them. Then intervening on $X_1$ with some measure $\mathbb{Q}$ cannot be the same as intervening on $X_1$ and $X_2$ with $\mathbb{Q}\otimes\mathbb{P}$, since the latter intervention would change the joint distribution of $X_1$, $X_2$ and $X_3$ even if we give them the same marginal distributions.
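The composition counterexample can be made numeric. The following is our own hypothetical instantiation (the choice $Y=\mathbf{1}\{X_2=X_3\}$ and all names are ours, not the paper's): $X_2$ and $X_3$ copy a shared exogenous noise $U$, and the measure of $Y$ depends on their joint distribution.

```python
# Hypothetical numeric instance of a composition counterexample:
# X2 = X3 = U for a shared fair coin U, X1 is an independent fair coin,
# Y = 1{X2 == X3}, and there are no causal links among X1, X2, X3.

def p_Y_equals_1(pmf):
    """P(Y = 1) under a joint pmf over (x1, x2, x3), with Y = 1{x2 == x3}."""
    return sum(p for (x1, x2, x3), p in pmf.items() if x2 == x3)

# do(X1 ~ Bernoulli(1/2)): X2 and X3 remain perfectly correlated.
do_x1_only = {(x1, u, u): 0.25 for x1 in (0, 1) for u in (0, 1)}

# do((X1, X2) ~ Bernoulli(1/2) x Bernoulli(1/2)): X2 keeps its marginal
# but is decoupled from X3, so the joint distribution changes.
do_x1_x2 = {(x1, x2, u): 0.125
            for x1 in (0, 1) for x2 in (0, 1) for u in (0, 1)}

print(p_Y_equals_1(do_x1_only))  # 1.0 -> joint correlation survives
print(p_Y_equals_1(do_x1_x2))    # 0.5 -> same marginals, different Y measure
```

The two interventions give $X_1$, $X_2$ and $X_3$ the same marginals but different measures for $Y$, so composition fails at the level of measures.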
Reversibility: For $S,R,U\subseteq T$, let $\mathbb{Q}$ be some measure on $\mathscr{H}_S$, and $\mathbb{Q}_1$ and $\mathbb{Q}_2$ be measures on $S\cup R$ and $S\cup U$ respectively such that they coincide with $\mathbb{Q}$ when restricted to $\mathscr{H}_S$. Then if $\mathbb{P}^{\text{do}(S\cup R,\mathbb{Q}_1)}(B)=\mathbb{Q}_2(B)$ for all $B\in\mathscr{H}_U$ and if $\mathbb{P}^{\text{do}(S\cup U,\mathbb{Q}_2)}(C)=\mathbb{Q}_1(C)$ for all $C\in\mathscr{H}_R$, then $\mathbb{P}^{\text{do}(S,\mathbb{Q})}(A)=\mathbb{Q}_1(A)$ for all $A\in\mathscr{H}_R$. We again agree with you that this does not hold in general in causal spaces. In fact, Example 4.2 in our paper is a counterexample of this, with $S=\emptyset$.
So all in all, we agree with your answers of no, yes, yes, as to whether the causal space framework challenges these fundamental properties of SCMs. Composition is an interesting one, because even in the SCM setting, it is very clear that it does not hold if we are interested in the *measure* of the target variable, not just the pointwise evaluation of the structural equations. This is a very interesting discussion, and we propose to include it in future versions of this work. Thank you very much again for raising this point!
**Line 328**
This is a terrible typo. Thank you very much for pointing it out.
We thank you again for the review, positive evaluation and valuable suggestions for improvements. We hope to have clarified your questions, and should you have any further questions, or if we misunderstood any of your points, we would be happy to engage in the author-reviewer discussion.
---
Rebuttal Comment 1.1:
Comment: Thanks, the axiomatic discussion is interesting and of course I agree it should be included. To the extent that a completeness proof could be given it might shed light on the question about the rice example (namely, if these principles are complete and the rice distribution obeys them, then there must exist a cyclic SCM modeling the distribution even if it seems difficult to model). I maintain my overall positive rating. | Summary: This paper proposes a measure-theoretical axiomatization of causality with the notion of causal space and with a collection of causal kernels encoding the causal information.
Strengths: The paper offers an axiomatization of causality which is based on the measure-theoretical foundation of probability theory.
Weaknesses: (1) It seems that Definition 2.2 is not well-defined because, given a subset S of T, we cannot decide the causal mechanism K_S. Also see my questions below.
(2) I really could not understand the claim (Line 147) that "intervention is the process of placing any desired measure,... along with an internal causal mechanism ...". Then it seems that intervention can also be treated as conditioning.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: (1) I am confused by Definition 2.2 (Causal space). For a given S, the causal mechanism seems undecided: there are many causal mechanisms satisfying the two axioms. For two subsets S1 and S2 of T, what is the relationship among K_{S1}, K_{S2} and K_{S1\cup S2}?
(2) It seems that all variables are independent in Definition 2.2. How can one formulate the relationship that variable X causes another variable Y in the causal space?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our submission. We regret that you have a rather negative view of the paper, and we hope that our clarifications below, as well as the other reviews, can give you a more positive view.
**Definition 2.2**
This is the main definition of the paper, containing the two axioms of causal kernels. We regret that you were not convinced by these axioms of causal spaces. We'll do our best to address your concerns.
For a given $S$, of course there are many possible causal kernels $K_S$ that satisfy the two axioms, but in the end the causal space will have one $K_S$ specified. This is the same in SCMs, where, given a random variable (or a node), there are many possible functions that can act as the structural equation for this variable, but in the end the SCM will specify one function. Or even just in probability spaces, where of course there are many possible probability measures that satisfy the axioms of a probability measure, but a given probability space will have one measure specified.
For two subsets $S_1,S_2$ of $T$, there is a priori no relationship between $K_{S_1}$, $K_{S_2}$ and $K_{S_1\cup S_2}$, and for full generality, this is desirable, since, even in the SCM framework, we can easily construct examples where specifying $K_{S_1}$ and $K_{S_2}$ does not fully determine $K_{S_1\cup S_2}$.
We are not sure what you mean by the variables being independent in causal spaces, but it is easy to formulate a two-variable situation in the causal space framework with $X$ causing $Y$. This is precisely what was illustrated in Example 2.5.
**Line 147**
The fact that we take this view on intervention is not a "claim", but the philosophy behind our definition of intervention, and it agrees with most other definitions of intervention in the literature, including the SCM case. We pick a subset of the variables on which we would like to intervene (most often a single variable), and we place a desired measure on those (that) variable(s) (for hard interventions, a Dirac measure). The causal components in the frameworks (the structural equations in SCMs, the causal kernels in our causal spaces) then determine what happens to what we did not intervene on.
In our causal space framework (and in any other framework), intervention is categorically not the same as conditioning - in fact this is the whole point of having any theory of causality. In examples throughout our submission, there are many instances in which intervention is not the same as conditioning, showing that our causal space framework accommodates this. For example, in Example 2.5, conditioning on temperature is not the same as intervening on temperature, and in Example 4.4 and Figure 5, conditioning on the Brownian motion to have a particular value at a particular time point is not the same as intervening on the Brownian motion to have a particular value at a particular time point.
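To make the distinction concrete, here is a minimal confounded example of our own (hypothetical, not one of the paper's examples): a hidden $U$ drives both $X$ and $Y$, and $X$ has no effect on $Y$, so conditioning on $X$ shifts the distribution of $Y$ while intervening on $X$ does not.

```python
# Minimal (hypothetical) illustration: U ~ Bernoulli(1/2), X = U, Y = U,
# with no arrow from X to Y. pmfs are over outcomes (u, x, y).

def prob(event, pmf):
    return sum(p for w, p in pmf.items() if event(w))

observational = {(u, u, u): 0.5 for u in (0, 1)}

# Conditioning: observing X = 1 reveals U = 1, so P(Y = 1 | X = 1) = 1.
p_y1_given_x1 = (prob(lambda w: w[1] == 1 and w[2] == 1, observational)
                 / prob(lambda w: w[1] == 1, observational))

# Intervening: do(X = 1) severs the dependence of X on U, leaving Y = U
# untouched, so P(Y = 1 | do(X = 1)) = P(Y = 1) = 1/2.
interventional = {(u, 1, u): 0.5 for u in (0, 1)}
p_y1_do_x1 = prob(lambda w: w[2] == 1, interventional)

print(p_y1_given_x1, p_y1_do_x1)  # 1.0 0.5
```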
We thank you again for your review, and we hope that our rebuttal, along with the opinions of the other reviewers, are sufficient to convince you at least of the validity of causal spaces, and if possible, their merits. Please let us know if you have further questions, or if any of our clarifications were unclear.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer xRcV,
You have been the most negative reviewer for this paper. Can you please respond to the authors' rebuttal, and also take the other reviews into account?
Thank you, AC | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their time and effort in reviewing our paper, and for making many valuable suggestions that are sure to improve the draft. We are very grateful for your time, and do not take it for granted. We are also very grateful for the mostly positive reviews, and kind words. We answer points raised by individual reviewers separately, except those relating to cycles and Example 4.2, which we answer jointly here.
**Do cyclic causal relationships truly exist, or is everything acyclic if we include a time component?**
As challenged by reviewer TfjY, it is difficult to think of truly instantaneous cyclic causal relationships, unless they are between random variables that do not live in separate components of a product $\sigma$-algebra (see Remark 2.7(i)). For example, if $X$ and $Y$ represent the weight of a box in kg and in stones respectively, then one could argue that $X$ "causally" affects $Y$ and vice versa, and that this causal effect is instantaneous. However, in our opinion, it makes no sense to talk about a causal effect between random variables that do not live in separate components of a product $\sigma$-algebra: in SCMs, or in any other graph-based causal model, the random variables represented by the nodes should live in separate components of a product $\sigma$-algebra. Note that this is different from the case in which two random variables do live in separate components of a product $\sigma$-algebra but a measure *happens* to be supported on some kind of diagonal.
We believe that even in non-cyclic situations, most causal relationships have a hidden time component, although one could argue that some causal effects really are instantaneous, for example, altitude causally affecting temperature, or the picture of a cow causally affecting its label to be "cow". Our stance is more aligned with reviewer pcib, in that cyclic causal models are useful abstractions of what are in the end non-cyclic causal relationships. We also believe that it is extremely difficult to argue against the necessity of such an abstraction, if researchers are happy to model non-cyclic, time-dependent causal relationships without explicitly modelling the time component (as is in fact very often done) and if we believe in the necessity of being able to do so. It is in this sense that we wrote "cyclic causal relationships abound in the real world", although, as pointed out by reviewer pcib, this sentence may be somewhat misleading. We propose to replace it with something that better reflects the discussion in this paragraph. We also do not want to completely rule out the possibility that one might come across a truly instantaneous and cyclic causal relationship, although we are leaning more towards the belief that such a situation may not exist.
On the other hand, if one were to insist on modelling the time component explicitly every time it is present (which is not our stance), then we argue that the current tools to model causality within stochastic processes fall short, and we believe that causal spaces make a significant contribution in this regard too, as argued in Section 4.3 and as agreed by reviewer aMnb. In answer to reviewer pcib, no, we do not subscribe to causation that goes backwards in time, and it is for this reason that we felt it important to introduce the concept of *time-respecting causal mechanism* in Definition 4.3.
**Example 4.2**
Reviewers aMnb and pcib suggested that it would not be difficult to model the situation in Example 4.2 with a solvable cyclic SCM [Bongers et al., 2021]. To us, this was difficult, and we didn't manage to do so, both when we were writing the paper and during this rebuttal period. One of the reasons is that the observational measure and the interventional measures are decoupled, and the interventional measure (after intervening on either variable) does not equal the corresponding conditional measure, which means that we must include a common hidden confounding variable. But even apart from this, it seems like a difficult task (at least for us; perhaps we lack the skills or experience for it) to set up the noise variables and their distributions, and to come up with structural equations that yield given desired observational and interventional measures. The example given in [Bongers et al., 2021, Example 3.5] seems like an extremely simple case. We admit that it could still be possible and that we just lack the skills to do it; equally, we did not show that this was *not* possible, and we would be curious to know if the reviewers were able to do this.
But it should at least serve to illustrate one point - that in SCMs, if we start with noise distributions and structural equations that involve cycles, then it is highly non-trivial to show the existence and uniqueness of a solution, and if we start with the observational and interventional distributions of the endogenous variables, then it is (at least for us) very difficult to find the noise distributions and the structural equations that yield them as a (unique) solution. On the other hand, we argue that causal spaces facilitate a much more natural expression of cyclic causal relationships.
We would like to thank all reviewers again for their time, and if you have any further questions, or if any of our explanations were unclear, or if we misunderstood any of your points, we would be very grateful if you could let us know. We look forward to engaging in further discussions in the author-reviewer discussion period. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: (Preamble warning: given the number of NeurIPS papers I had to review and the time I had, and given other duties, I focused solely on the main content of the paper, not consulting the appendices, which were actually longer than the paper.)
The paper proposes and discusses a new framework, based on measure-theoretic probabilities, to encode causal systems, both on the observational and interventional level.
This is done through the use of probability kernels and probability measures defined over Cartesian products of spaces. The framework generalizes a number of previously proposed frameworks.
Strengths: * A unifying framework relying on measure-theoretic probabilities that is well-justified from a mathematical point of view, and can be summed up by two technical constraints on the system.
* A quite well-written paper, with illustrative examples showing how the framework can be applied.
Weaknesses: * The authors make bold claims on probability theory: while it is quite okay to rely on measure-theoretic probabilities to derive a framework, I would argue that claiming that this view of probability is "near-undisputed and universally accepted" (P1, L24-25) and that "In ... probability theory, one starts by assuming the existence of a probability space" (P3, L101-102) is not true in the modern world. For instance, and only to mention a few works, De Finetti's betting accounts of probability allow for finitely additive probabilities and do not start by defining a probability space on a $\sigma$-algebra. Similarly, Shafer and Vovk's account of probability from a game-theoretic point of view does not start from a measure-theoretic viewpoint. Finally, one could also argue that causality could be seen through the lens of other uncertainty theories, and that the universality of probability theory could be challenged itself (see the works of Peter Walley on imprecise probabilities extending De Finetti's betting argument, and works of his followers). So even if I think that the adopted viewpoint here is reasonable, I would avoid suggesting that it is the only one that can be considered, or that it can hardly be questioned.
* Sufficiency of the proposed axioms: the axioms are given as mathematical constraints over the system, and make sense in a fully probabilistic setting (it is less clear if one considers languages more expressive than probabilities). However, such kinds of axioms (no change in the absence of new information + conformity with imposed constraints on the system) can also be found in other settings, such as Jeffrey's updating rule in the presence of uncertain information. I somehow miss a discussion of what makes those axioms peculiar to the situation of causality. Or does it follow from the assumption that the probabilistic kernels DO model an existing cause rather than something else? In this latter case, one could imagine a situation where the specified kernels do not encode actual causal relationships while still following the proposed axioms. My feeling is that the axioms concern mathematical properties ensuring that the system will behave well, rather than axioms ensuring that the encoded concepts and systems will indeed be causal in nature.
* Not clear whether more generality gives a better means to identify causation: in Example 4.1, I appreciate the fact that the proposed model is much more flexible than SCMs; however, this example raises the question of whether the system is not too general to be able to identify causal relationships. More precisely, assume we observe the S-I correlation, as well as measure the temperature. If we focus on S and I, we then have two models that perfectly explain the data: one with no causation and observed correlation, and one with causation (where one would have to specify the causal mechanisms). In contrast, an SCM would not be able to account for the S-I correlation without assuming causation, and would encourage the modeller to add potential explanatory causal links. So it is unclear whether the provided generality comes with better capacities for identifying potential causal relationships in practical modelling situations.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Questions:
* P6, 240-243: what is meant by unique existence? Is it meant that a given observational/interventional system can only be represented by a unique set of measures and kernels? The exact meaning of uniqueness is unclear to me here.
* In example 4.2., I agree that one variable can influence the other (in a causal way), but I would argue that we are also faced here with a dynamical process (rice production can only be impacted by price for the next period of rice harvesting, and vice-versa). Would it be possible to provide an example of cyclicity where we are not faced with time-evolving processes?
* P8, L318: could you give some references to identify "by some authors"?
* Computability: one question that arises from the presented framework (but is probably to be addressed in future work) is the computability of the presented framework. Maybe it would be good to mention a few cases where this is achievable, if the authors have already identified such cases.
* P9, L359-360: it is not entirely clear to me how the proposed axiomatization and the measure-theoretic view of probabilities are linked in the proposed framework. To be more precise, my feeling is that the two proposed axioms of no action-no causation and of agreement with the causal mechanisms are not especially linked to a measure-theoretic view, which merely offers the technical tools to derive them. For example: to what extent do we really need $\sigma$-additive measures rather than finitely additive ones to express these two axioms? Of course the first option is technically/pragmatically simpler to deal with, but I am not completely convinced that this is at the heart of the expressed axioms.
Suggestions:
* Figure 1: suggesting that probability theory is confined to modelling data-generating processes is a bit misleading, as a Bayesian setting allows one to put probabilities on any quantity of interest (including those not generated by data).
* Given this absence of discussion about the philosophical concepts of causality and how these axioms encode them, speaking of an "Axiomatization of causality" seems a bit strong to me. Why not refer in the title to a "Generalised causal model through measure theory"?
* P3, L108-113: the vocabulary used here, as well as the various distinctions made, is a bit ambiguous: why speak of "chains of trials", which makes one think of repeated experiments, when in all further examples the measurable spaces belong to distinct variables?
* P5, L195-196: an underlying assumption here is that the causal kernels are 1. perfectly specified and 2. encode causal relationships. In practice, nothing prevents those two points from being false, and I think it should be mentioned somewhere that those assumptions are made.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations were adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and effort in reviewing our submission; we are very grateful for your overall positive evaluation of the paper and your valuable suggestions. We will do our best below to address your concerns and queries.
**Weaknesses**
Thank you for this point. We are aware that theories of probability other than Kolmogorov's exist. Of those you mentioned, we are aware of de Finetti's approach, and are vaguely aware of Shafer/Vovk's game-theoretic framework. We were not aware of Walley's work - thank you for pointing it out. There is also an approach more amenable to Bayesian probability ([Jaynes and Bretthorst, 2003], or https://en.wikipedia.org/wiki/Cox%27s_theorem). We acknowledge that these works are valuable, and we propose to make more of an effort to acknowledge them. We also propose to water down our language from "near-undisputed and universally accepted". However, we believe it is equally unreasonable to treat Kolmogorov's framework as simply one out of many on the same level of significance, given that the vast majority of research in probability theory, as well as the vast majority of textbooks and lecture courses on probability theory, is built on it.
We agree that the two axioms we provide are a priori mathematical properties that are not in themselves causal in nature. However, this is also true for probability spaces - they are simply measure spaces where the whole set happens to have measure 1. Same for densities - they are simply functions that happen to integrate to 1. Yet, we give these objects a probabilistic interpretation. Our axioms of causal kernels should be viewed in the same light.
Thank you for raising this point. We agree that in general identification is difficult, if not impossible, with such generality. However, we would like to point out that if measuring temperature is possible, and if it is in our interest to do so (for identification or any other reason), then our causal space framework accommodates that, so in this sense, nothing is "lost" compared to the SCM framework. We maintain that this is advantageous from a modelling perspective over SCMs, where the researcher is "forced" to include other variables that explain the correlation. For identification from data in our framework, Section D in the Appendix is dedicated to it, via a concept which we call "sources".
**Questions**
P6, 240-243: Uniqueness here is meant simply to contrast with SCMs, where, without the acyclicity assumption, there may be many distributions over the endogenous variables that satisfy the structural equations and the noise variables (see, for example, [Bongers et al., 2021, Foundations of Structural Causal Models with Cycles and Latent Variables]). In causal spaces, the observational and interventional distributions are always uniquely defined.
Example 4.2: Thank you for raising this point. Please see author rebuttal to all authors.
P8, 318: For example, [Peters et al., 2017, Elements of Causal Inference, Remark 6.5]. We agree that we should give some citation here, and we propose to do so in future versions of this work.
Computability: This is not a question that we had thought to address in this submission at all, but we agree that this is a crucial and interesting research direction. We are grateful to you for pointing it out.
P9, 359-360: We agree that the ideas behind the two axioms we give are not inherently measure-theoretic, and that it would probably be possible to endow other frameworks (e.g. graphical models) with these concepts. However, as you point out, the way these axioms represent these concepts is measure-theoretic, since the transition kernels are very much measure-theoretic tools.
**Suggestions**
Figure 1: In terms of what is included in the outcome space, the parameters in a Bayesian setting could also be viewed as "data". But we see the point you are trying to raise. What would be your suggestion for the left-hand box?
The focus of this work was not the philosophical discussions around causality on the level of a philosophy paper, which we hope you understand. However, we maintain that calling our proposal an "axiomatisation" is justified in the mathematical sense, in that we lay out what we assume to be true about causal spaces, and we build subsequent definitions and results on these axioms. This is precisely in the form of other "axiomatic frameworks" in mathematics, not least the probability axioms of Kolmogorov. Moreover, throughout the text we clearly state what philosophical approach to causality we take, namely that we view it as a study of what happens when we intervene, and in Remark 2.4, we discuss precisely what our axioms encode, so we do not fully agree with your judgment that there is an "absence of discussions about the philosophical concepts of causality and how these axioms encode them".
P3, L108-113: Thank you for this suggestion, yes, we agree that "chain of trials" is somewhat misleading. The terminology is taken from [Cinlar, 2011, Probability and Stochastics, page 161]. Of course, the mathematics $(E_t,\mathcal{E}_t)$ allows for distinct variables even here. We propose to change it to "set of trials", would you agree that this is better?
P5, L195-196: We would again like to draw parallels with probability spaces, where, of course, it is equally impossible to perfectly specify the probability measure in practice for any given problem, but we still do so in the modelling process. In the same vein, we feel it is justified to assume that for the causal kernels. We will make this point more explicit.
We thank you again for your positive review, and raising many interesting points. If accepted, your suggestions are sure to improve our paper. If any of our answers were unclear or we misunderstood your point, please let us know. We would be happy to engage in further discussions.
---
Rebuttal Comment 1.1:
Title: Thank you for the discussion
Comment: Dear Authors,
Thanks for the nice answers and discussion.
About the little details:
* P3, L108-113: I would say that the name trial evokes repeated experiments in my mind, and I am not sure it quite fits the proposed framework. However, if this is the vocabulary used in some previous similar settings, I have no strong arguments against it.
* Figure 1: I would say that if one has graphical models and subjective interpretations of probability in mind, one could probably replace data by evidence and data-generating process by knowledge model or something similar (population model?). However, I also have no strong opinion about that; it is just that confining probability and causality to data and the data-generating process (which suggests a frequentist flavour) seems a bit limiting (yet sufficient anyway to encompass most if not all ML problems).
* About the measure-theoretic approach and the axiomatisation: thank you for the discussion here, and I somehow agree with the points made about the use of axiomatisation and the fact that Kolmogorov's measure-theoretic approach is the most used and widespread. However, an approach/interpretation being the main one is not really an argument to support the fact that it is the most relevant one to treat/view a given problem. And I would argue that, in the case of causality, having a clear semantics/interpretation of the probabilities and probabilistic models used seems important, hence my hope for such a discussion. I am quite fine with purely formal/mathematical characterisation theorems, and I do appreciate the work presented here. I guess more philosophical discussions can be postponed to future times.
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: Dear reviewer TfjY,
Thank you for your thoughtful response to our rebuttal.
We agree that the word trial does make one think of repeated experiments. We'll try to think of some other wording that is more suitable. Perhaps occurrences?
We also agree with the second point, and we think "population model" is a great suggestion, and certainly sounds more fitting than the data generating process, especially if we do not restrict ourselves to the frequentist mindset. We'll make this change in the future versions of this work.
Thank you for providing further discussion on this important point. We agree that, if one wants to extend a theory of probabilities to a theory of causality, then one must carefully consider the merits of each framework of probability for that goal, and Kolmogorov's framework shouldn't be the automatic choice purely on the basis of its widespread acceptance. We must admit that we have not given this as much deliberation as we perhaps should have done, and as written in the original rebuttal, we propose to give more of a discussion on this point. Thank you also for reiterating your appreciation of our work, we don't take it for granted and we are hugely grateful. | null | null | null | null | null | null |
Contrast Everything: A Hierarchical Contrastive Framework for Medical Time-Series | Accept (poster) | Summary: This paper investigates contrastive learning for medical time series and develops multiple contrastive objectives at different levels (patient, trial, observation, sample). The downstream application is classification on EEG, ECG, EMG, and EOG signals. The proposed method has obtained the best or competitive performance on two EEG datasets and one ECG dataset.
Strengths: - It is interesting to see a hierarchical view for a specific type of medical time series, such as ECG and EEG. This hierarchical multi-level view captures the unique characteristics of this type of time series.
- Given this novel view, the proposed contrastive learning at all levels is straightforward and reasonable.
- The paper is well written and easy to understand.
Weaknesses: - My first concern is the discrepancy between the title and the actual methodology. "Multi-level contrastive learning" sounds like a better choice than "multi-granularity representation learning" because even at the patient or trial level, the representation learning still happens on the sample-level input, and you do not squeeze or aggregate representations at higher levels. Thus, I feel there is no multiple granularity. Please correct me if I misunderstand your method.
- The second concern is that the proposed method is only evaluated on limited datasets. It seems that the method only works on EEG-related datasets (AD and TDBrain), beating other baselines significantly, but is modest on PTB, an ECG dataset. I would like to see broader empirical demonstrations and analyses.
- Moreover, the ablation tests are only conducted on the AD dataset, on which the proposed method has achieved the largest performance improvements. I would like to see more detailed ablation tests and analyses on other datasets.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Check the weakness part.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Lack of sufficient verifications on large-scale and more diversified datatsets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are happy that you are interested in our paper and feel it is well-written and easy to understand. Thank you very much for your three valuable concerns! Here we respond in detail to each of them. If you do not feel we have sufficiently justified a higher score, please let us know where we can improve our work and any further concerns you have. Thank you again!
***
**Q1**: My first concern is the discrepancy between the title and the actual methodology. "multi-level contrastive learning" sounds like a better choice than "multi-granularity representation learning" because even at the patient or trial level, the representation learning still happens at the sample-level input, and you do not squeeze or aggregate representations at higher levels. Thus, I feel there is no multiple granularity. Please correct me if I misunderstand your method.
**A1**: Thanks for bringing up this concern. We acknowledge that we do not compress or aggregate representations at higher levels; the representation learning consistently occurs at the sample-level input.
However, our objective is not to acquire patient- or trial-level representations. Instead, the various granularities serve to construct positive and negative pairs for contrastive learning. By harnessing patient- and trial-level information, we extend our representation learning beyond conventional instance discrimination methods. This approach facilitates capturing consistencies across samples (instances) through a self-supervised framework. In the original submission, we took note of this concern and examined the difference between our method and multi-granularity approaches in other domains. Please refer to Appendix E.2 for a detailed analysis.
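To make this concrete, here is a minimal, hypothetical sketch (not the paper's implementation) of how patient- and trial-level positive pairs might be built from sample metadata; the `patient_id` and `trial_id` fields are assumptions for illustration only:

```python
from itertools import combinations

def positive_pairs(samples, level):
    """Group sample indices that share an ID at the given level.

    `samples` is a list of dicts with hypothetical metadata fields; at the
    patient or trial level, any two samples sharing the same ID form a
    positive pair, extending contrast beyond single-instance discrimination.
    """
    key = {"patient": "patient_id", "trial": "trial_id"}[level]
    groups = {}
    for idx, meta in enumerate(samples):
        groups.setdefault(meta[key], []).append(idx)
    # All within-group index pairs are positives at this level.
    return [pair for g in groups.values() for pair in combinations(g, 2)]

samples = [
    {"patient_id": "p1", "trial_id": "t1"},
    {"patient_id": "p1", "trial_id": "t2"},
    {"patient_id": "p2", "trial_id": "t3"},
    {"patient_id": "p1", "trial_id": "t1"},
]
print(positive_pairs(samples, "patient"))  # [(0, 1), (0, 3), (1, 3)]
print(positive_pairs(samples, "trial"))    # [(0, 3)]
```

Sample- and observation-level positives would instead come from augmented views of a single sample, as in standard instance discrimination.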
Moreover, we appreciate your attention to the paper's title. We will thoroughly consider whether to modify it in the camera-ready version. Indeed, we deliberated on the paper's title extensively. Alternatively, we contemplated a title such as "Hierarchical Contrastive Learning." What are your thoughts on this title?
***
**Q2**: The second concern is that the proposed method is only evaluated on limited datasets. It seems that the method only works on EEG-related datasets (AD and TDBrain), beating other baselines significantly, but is modest on PTB, an ECG dataset. I would like to see broader empirical demonstrations and analyses.
**A2**: Thank you for raising this concern. We add one more large-scale ECG dataset (PTB-XL; 17596 patients, 191400 samples, 5 classes). See R-table 3 for experiment results. Our method performs better in 11 of 12 tests than the other three SOTAs. Particularly with label fraction 1%, we outperform SOTAs by about 5% F1 score and 3% AUROC, demonstrating the effectiveness of the contrastive pre-training of COMET in reducing the dependence on labeled data.
***
**Q3**: Moreover, the ablation tests are only conducted on the AD dataset, on which the proposed method has achieved the most performance improvements. I would like to see more detailed ablation tests and analyses on other datasets.
**A3**: Thank you for raising this concern. We add the ablation study on TDBrain and PTB. See R-tables 1 and 2 for experiment results.
---
Rebuttal Comment 1.1:
Title: Thanks for the reply
Comment: "Hierarchical contrastive learning" better aligns with the underlying approach of this paper. This paper has nothing to do with multi-granularity representations. Using "multi-granularity representation learning" can be very misleading.
More datasets and more ablation results look good.
I would only like to raise the score to "weakly accept" because the current version needs to be modified a lot to fit the new theme, "hierarchical contrastive learning".
---
Reply to Comment 1.1.1:
Title: Thanks for the reply
Comment: Thank you again for improving the score! Thank you for your constructive suggestion on our paper’s title. We will change our title to “Hierarchical Contrastive Learning for Medical Time-Series.” Besides, we will thoroughly modify our terms/notations/figures/text to keep the paper consistent and easy to follow. For example, we plan to update all “granularity” into “level” to reduce the potential misunderstanding caused by “granularity” in the camera-ready version. | Summary: This paper proposes a multi-granularity contrastive learning method (named COMET) on medical time-series data. The methods build postive and negative pairs from patient-level, trail-level, sample-level, and observation-level. The proposed method is evaluated on two EEG, and one ECG dataset.
Strengths: 1. The paper is well organized and clearly written. Figure 2 is very informative. Related works are also comprehensive.
2. The performance of the proposed method is decent based on the experimental tables.
Weaknesses: 1. The proposed method is not new, and many multi-granular contrastive learning models have been proposed in the past few years. Based on Figure 1, CLOCS, TS2Vec, and TS-TCC are also multi-granular contrastive learning methods, specifically in the medical time-series domain. The contribution and extension (from existing two-granularity methods to the proposed four-granularity method) of this paper seem trivial given these existing methods.
2. Overall, the proposed COMET method can be viewed as "a finer-granular way of doing data augmentation". In that sense, the data augmentation methods for all models (including baselines) seem different; is the experimental design then fair?
3. The considered two EEG and one ECG datasets all seem small in the medical time-series domain. What is the number of parameters in the model? Usually, the training data size is expected to be 10 times larger than the number of parameters, and contrastive learning methods naturally need a lot more unlabeled data to train. SHHS is a large EEG dataset, and the ECG 2020 challenge datasets are also large: https://physionet.org/content/challenge-2020/1.0.2/.
4. Although the authors have given another example on satellite sensor applications, the generalizability of the proposed model is still questionable. Given that many multi-granular contrastive learning models have been developed in other domains as well, could the authors distinguish this paper and highlight the key features of COMET if generalized to other domains?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your thoughtful feedback and happy that you feel our paper is well-organized and the figure is informative. The following responds to the questions about our method's weaknesses.
***
**Q1**: The proposed method is not new since many multi-granular contrastive learning models exist.
**A1**: We agree that others have implicitly utilized multi-level information, albeit within highly specific models tailored to different data types. In contrast, we introduce a general framework that applies to all types of medical time series data. We explicitly present the concept of multi-granularity in the context of contrastive learning.
After reviewing numerous existing contrastive learning methods within the time-series domain, we consistently posed a pivotal question to ourselves: Can we design a straightforward yet broadly applicable contrastive learning framework that can be adapted to all forms of medical time series, akin to the classical model SimCLR in the domain of contrastive learning? In contrast to general time series, medical time series typically exhibit two additional granularities: patient and trial. Our objective is to craft an innovative framework that utilizes all the information within medical time series in a self-supervised manner. This approach enables us to harness patient- and trial-level information to learn consistency across instances. Simultaneously, we leverage the sample and observation levels to facilitate conventional instance discrimination. In COMET, users can easily toggle between different granularities or adjust their weights by setting the hyper-parameter λ, depending on the distinctive features of the medical time series.
Some existing methods, like TS2Vec, CLOCS, and TS-TCC, are also multi-granular. They can be viewed as particular cases of our model. In other words, we are not an extension of their models; rather, in the domain of medical time series, our approach includes and is compatible with them. Our work will play a role in summarizing, inspiring, and guiding future work on contrastive learning for medical time series.
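For intuition, the per-level weighting described above can be sketched as a weighted sum of per-level InfoNCE-style losses. This is a simplified, plain-Python illustration under our own assumptions (toy `info_nce`, embeddings as lists of floats), not COMET's exact objective:

```python
import math

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE-style loss: row i of z1 should match row i of z2.

    z1 and z2 are lists of equal-length embedding vectors; positives share
    an index, and all other rows of z2 serve as negatives.
    """
    def norm(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    z1, z2 = [norm(v) for v in z1], [norm(v) for v in z2]
    loss = 0.0
    for i, a in enumerate(z1):
        logits = [sum(x * y for x, y in zip(a, b)) / temperature for b in z2]
        m = max(logits)  # subtract the max for numerical stability
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += -(logits[i] - log_denom)  # positive sits at index i
    return loss / len(z1)

def multi_level_loss(views, lambdas):
    """Weighted sum of per-level contrastive losses.

    `views` maps a level name to a (z1, z2) pair of embeddings whose rows
    are positives at that level; `lambdas` holds the per-level weights.
    """
    return sum(lambdas[k] * info_nce(*views[k]) for k in views)
```

Toggling a level off then corresponds simply to setting its λ weight to zero.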
***
**Q2**: If the data augmentation methods for all models (including baselines) seem different, then does the experimental design fair?
**A2**: As mentioned in A1, our model is not an extension but rather includes and is compatible with their models within the domain of medical time series. Additionally, existing methods like TFC, TS2Vec, and TimeCLR [10] have their specific data augmentation techniques integrated as part of their methods. Therefore, their original data augmentation methods should remain unchanged when using them as baselines for comparison.
In our original submission, considering the extra contrastive blocks in our method, we conducted a heavy-duty case study by running extra epochs during contrastive pre-training or implementing extra contrastive blocks for the baselines (Appendix F.4).
***
**Q3**: The two EEG and one ECG datasets seem small in the medical time series domain. The training data size is expected to be 10 times larger than the number of parameters. Use SHHS or the ECG 2020 challenge datasets.
**A3**: Our responses to this question contain three parts.
1) Regarding the PTB dataset, our model comprises 687,552 parameters. While increasing the size of the training data to exceed the number of parameters is indeed an effective approach to counter overfitting, modern techniques such as L2 regularization, dropout, and batch normalization have proven useful in mitigating overfitting risks, enabling models to yield favorable performance even with limited data [11][12].
2) Our datasets are generally not small within the scope of medical time series. The medical time-series domain faces challenges in labeling due to factors like the complexity of data collection and associated expenses, including issues such as ethical considerations [1][2]. Moreover, datasets centered around specific ailments, such as Alzheimer's disease, are rarely accessible and often not publicly available [7].
3) In the original submission, we utilized the PTB dataset, part of the ECG 2020 challenge available at PhysioNet 2020. Additionally, we newly add a considerably larger dataset, PTB-XL (17596 patients, 191400 samples, 5 classes), within the ECG 2020 challenge. See R-table 3. Our method performs better in 11 of 12 tests than the other three SOTAs. Particularly with label fraction 1%, we outperform SOTAs by about 5% F1 score and 3% AUROC, demonstrating the effectiveness of the contrastive pre-training of COMET in reducing the dependence on labeled data.
***
**Q4**: The generalizability of the proposed model?
**A4**: Our response has three parts.
1) As indicated in the title and abstract, COMET is uniquely tailored for medical time series, capitalizing on their distinctive characteristics. It is not a universally applicable model for all types of time series, as general time series typically only have sample and observation granularities.
2) We offer an illustrative example involving satellite sensor applications, intending to inspire researchers in diverse domains. The key is to utilize all available information beyond label data for contrastive pre-training, such as patient ID. To adapt our approach to other domains, researchers must consider a crucial question: Does the dataset have additional information beyond sample labels? If so, can this information be harnessed for contrastive learning? The example of the satellite sensor application underscores the potential existence of supplementary information even in non-medical domains.
3) There are also other multi-granularity papers in different domains. We review and compare them with our method in Appendix E.2. Some papers focus on learning representations at various granularities, which differs from ours. In our case, the distinct granularities are employed for constructing positive and negative pairs during contrastive learning. | Summary: Medical time-series data, unlike domains such as CV and NLP, lack data labeling but contain more layers of information corresponding to observation, trial, and individual physiologies. Unlike previous methods that overlooked this multi-granularity, the authors propose COMET, which trains with the combined contrastive losses from all 4 granularities, and present COMET's elevated performance on multiple datasets, outperforming many up-to-date contrastive baselines. With ablation studies and a disclosed code repo, the authors evaluate the effectiveness of the proposed COMET and conclude it is a pioneering framework tailored for medical time series.
Strengths: • Figures are well done and make it much easier to understand what the authors are talking about when they mention the different levels of granularity for medical time series data
• The paper is very well written and easy to understand. The method seems simple but this is likely due to the authors’ explanation of information granularity. The paper follows a clear logical flow.
• Capturing multi-granularity information with contrastive learning is a very good idea backed by performance improvements.
• They backed their proposed COMET with comprehensive experiments on multiple datasets with many up-to-date baselines
• The performance of their method is very high in comparison to the methods they consider to be related work.
Weaknesses: • While the generalization is likely to follow from this work - this method is somewhat limited to ECG data. ECG data has many structural patterns in medical time series data that come from the cyclical nature of a cardiac cycle that other medical time series might not. As a result, the methods might actually be limited to pulsatile-type medical time series, which would still be a contribution.
For example, on page 2, line 71, the statement that medical time series are "low cost and non-invasive" would not necessarily be correct for data collected from an invasive A-line, for example. However, I mostly agree with the paragraph on EEG, ECG, etc., and how they benefit healthcare practices. I would suggest the authors better address this limitation, which does not take away from the contribution of the work overall.
• While many works are compared, other ECG-related methods, one of which they mention in the paper, CLOCS, have not been compared. How does this method compare to ECG-specific methods?
• It was unclear how the time series features were fed into COMET and how they were broken into different granularities and evaluated by different contrastive losses (see the first bullets of the questions).
• It would be better to include ablation study on the process of discovering the optimal set of hyper-coefficients for each dataset.
• On page 6, line 215, D is defined as the dataset, but it should be defined before being referenced, namely in Section 4.2 or earlier?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: • Can the authors address the limitations? Is this work specific to cyclical time-series data?
• How do the results compare to other ECG contrastive pretraining methods?
• Reading the “comet.py” and “models/encoder.py’s TSEncoder”, it seemed like the same inputs “x” were used to calculate 4 levels of contrastive losses. Then what exactly is “x” (the time series input of the COMET), is it observation-level? Sample-level? Can authors better describe the actual inputs to the system?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: • Only addresses one modality of medical timeseries data
• Does not compare to other ECG contrastive pretraining methods
These have not been adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your very thoughtful feedback. We highly appreciate you carefully reading our figure, method, appendix, and code! We are happy you feel our paper follows a clear logic and is easy to follow!! Again, thank you for your comments, which help us improve our paper. Here are the responses to your questions.
***
**Q1**: Can authors address the limitations? Is this work specific to cyclical time-series data? • How do the results compare to other ECG contrastive pretraining methods?
**A1**: Thank you for raising these questions. We have added CLOCS and NCL as new baselines for comparison, as well as one more large-scale ECG dataset, PTB-XL (17596 patients, 191400 samples, 5 classes), for the experiment. See R-table 3 for the experimental results. Our method performs better than the other three SOTAs in 11 of 12 tests. Particularly with a label fraction of 1%, we outperform the SOTAs by about 5% in F1 score and 3% in AUROC, demonstrating the effectiveness of the contrastive pre-training of COMET in reducing the dependence on labeled data.
***
**Q2**: Reading the “comet.py” and “models/encoder.py’s TSEncoder”, it seemed like the same inputs “x” were used to calculate 4 levels of contrastive losses. Then what exactly is “x” (the time series input of the COMET), is it observation-level? Sample-level? Can authors better describe the actual inputs to the system?
**A2**: Thank you again for carefully reading our code! The input "x" utilized for each contrastive block is all sample-level data, as our primary objective entails acquiring sample-level representations. A batch of sample-level data "x" is passed to the encoder to process and apply data augmentation to generate embeddings for each block. Distinct levels exhibit varying data augmentation approaches or no augmentation. Subsequently, the embeddings from each level, coupled with their respective patient/trial IDs, are fed into the system to compute the contrastive loss. Each level is strategically designed to capitalize on consistency at differing granularities, engendering distinct contrastive loss formulations. For instance, positive and negative pairs are generated through data augmentation at the observation and sample levels, whereas at the patient/trial level, they are formed based on the presence of the same patient/trial ID.
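To make the pair construction concrete, here is a minimal, hypothetical Python sketch (not the authors' actual implementation; the function and variable names are assumptions) of how patient-level positive/negative pairs could be built from patient IDs. Trial-level pairing works the same way with trial IDs, while the observation and sample levels instead pair each item with its augmented view:

```python
def patient_level_pairs(patient_ids):
    """Form index pairs over a batch of sample embeddings: two samples are
    a positive pair if they share a patient ID, otherwise a negative pair."""
    positives, negatives = [], []
    n = len(patient_ids)
    for i in range(n):
        for j in range(i + 1, n):
            if patient_ids[i] == patient_ids[j]:
                positives.append((i, j))
            else:
                negatives.append((i, j))
    return positives, negatives

pos, neg = patient_level_pairs(["p1", "p1", "p2"])
# pos -> [(0, 1)]; neg -> [(0, 2), (1, 2)]
```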
---
Rebuttal Comment 1.1:
Comment: Thank you for your comments. I think these clarifications - and the new baselines - are valuable additions. | Summary: This paper proposes a multi-granularity framework leveraging data consistencies at different levels inherent in medical time series data. The model learns with contrastive loss designed at every data granularity, i.e., observation, sample, trial, and patient levels. The method is evaluated with three binary classification downstream tasks, showing significant gain over SOTA timeseries contrastive learning methods.
Strengths: 1. The idea of leveraging data consistencies at multiple granularities is interesting and seemingly effective.
2. The writing and presentation are easy to follow.
Weaknesses: 1. The contrastive learning for each granularity is simple and straightforward, making the novelty of the paper limited.
2. The majority of medical tasks are fine-grained in nature. That is to say, the difference between two samples of different classes may be very subtle and local. It is unclear how the data augmentation method for each granularity is designed to ensure it does not disturb the sample label.
3. The evaluation is a bit unpersuasive. The downstream tasks are all simple binary classification problems (e.g. for dementia detection, Myocardial infarction, or Parkinson’s disease).
4. For medical diagnosis tasks, accuracy and F1 shall not be used as the main metrics, as the data distribution is usually drastically imbalanced. I would suggest focusing on either AUROC or AUPRC. In this case, COMET shows inferior performance on PTB compared with TS2Vec. Any explanation for these results?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. The difference between ‘sample’ and ‘trial’ is a bit unclear. It seems that we can also consider a trial as a sample with a longer time span? How to segment trials into distinct samples? ”Each sample is a one-second interval with 256 observations. ” – Any justification for choosing a one-second interval here?
2. I am curious about the implementation details for Observation-level data consistency. How $t^-$ is chosen? Shall it be close to $t$ or distant from $t$? How to guarantee that $x_{i, t^-}$ is more different to $x_{i, t}$ than $\tilde{x}_{i, t}$, as the time series data often display repetitive periodicity?
3. When jittering and masking are used for data augmentation, would it break the continuity of the normal time series data?
4. The decision of using InfoNCE at observation and sample blocks while using NT-Xent loss at trial and patient level seems to be very arbitrary. Any explanation for it?
5. SOTA comparison: why NCL and CLOCS are not considered?
6. An ablation study on the hyperparameters $\lambda_i$ is missing.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: the limitations are presented in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are happy you feel our paper is easy to follow, interesting, and effective. We briefly respond to your concerns point-by-point. If you do not feel we have sufficiently justified a higher score, please let us know where we can further improve our work. Thank you again!
***
**Q1** The contrastive learning for each granularity is simple and straightforward, limiting the novelty.
**A1** We agree that the contrastive learning at each granularity is straightforward and not overcomplicated. However, we sincerely believe that proposing a simple solution to a challenging problem is a strength of our work. In this paper, we introduced the granularity concepts in medical time series and designed a simple-yet-effective framework that uses all available granularities in a self-supervised manner. Our COMET covers and is compatible with existing SOTAs at different granularities, which will play a significant role in summarizing, inspiring, and guiding future work on contrastive learning for medical time series.
***
**Q2** The difference between two samples of different classes may be subtle and local. How does the data augmentation method for each granularity not disturb the sample label?
**A2** Our answer contains two parts.
1) We did not apply data augmentation to patient and trial levels. Instead, we employed patient and trial IDs to establish positive and negative pairs for learning consistency across instances (Lines 167-176).
2) We used timestamp masking [6] for data augmentation at sample and observation levels. To prevent useless masking on zero-value regions, we applied masking to the projected embedding rather than the raw data (Lines 278-283). This method also mitigates the risk of masking affecting sample labels in medical tasks, as the embeddings of two samples in different classes could exhibit substantial differences after projection, even if their raw data appears similar.
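As an illustration only (the data layout, helper name, and exact 10% ratio below are assumptions based on this response, not the authors' code), timestamp masking applied to a projected embedding could be sketched as:

```python
import random

def timestamp_mask(embedding, mask_ratio=0.1, seed=0):
    """Zero out a random ~mask_ratio fraction of timestamps of a projected
    embedding (a list of per-timestamp feature vectors). Masking the
    embedding rather than the raw signal avoids wasting masks on
    zero-value regions of the raw data."""
    rng = random.Random(seed)
    t = len(embedding)
    n_mask = max(1, int(t * mask_ratio))
    masked_idx = set(rng.sample(range(t), n_mask))
    return [[0.0] * len(row) if i in masked_idx else list(row)
            for i, row in enumerate(embedding)]

emb = [[1.0] * 4 for _ in range(256)]  # e.g., one 1-second sample, 256 timestamps
aug = timestamp_mask(emb)              # 25 of the 256 timestamps zeroed (~10%)
```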
***
**Q3** The evaluation is a bit unpersuasive. The downstream tasks are all simple binary classification problems.
**A3** Our answer contains three parts.
1) In the initial submission, we evaluated COMET on diverse downstream tasks besides binary classification, including clustering and anomaly detection (Appendix F.3).
2) We mainly presented the results of binary problems because the majority of real-world applications of medical time series for disease diagnosis are binary classification tasks [7][8].
3) We have newly added a large-scale dataset, PTB-XL (17596 patients, 191400 samples, 5 classes), to prove our model works well on multi-class classification problems. See R-table 3. Our method performs better than the other three SOTAs in 11 of 12 tests. Particularly with a label fraction of 1%, we outperform the SOTAs by about 5% in F1 score and 3% in AUROC, demonstrating the effectiveness of the contrastive pre-training of COMET in reducing the dependence on labeled data.
***
**Q4** 1) Suggest focusing on AUROC or AUPRC on imbalanced datasets. 2) COMET shows inferior performance on PTB than TS2vec.
**A4**
1) We agree that AUROC and AUPRC are more appropriate for medical tasks. We presented six metrics (including AUROC and AUPRC) to provide comprehensive results to readers so they can analyze the results with their preferences. We will emphasize more about AUROC in the camera-ready version.
2) We did a careful ablation study on each block as you suggested and found that increasing the lambda weight on the observation block benefits the performance (see R-table 2). We attribute this to the periodicity of ECG signals. After adding more weight to the observation block, we obtained better results on AUROC (93.67±2.34) compared with TS2Vec.
***
**Q5** 1)The difference between ‘sample’ and ‘trial’ is unclear. Is the trial a sample with a longer time span? 2) How to segment trials into distinct samples? 3) Any justification for choosing a 1-second interval?
**A5** Our answer contains three parts.
1) A trial can indeed be considered a sufficiently long sample. In the original submission, we have provided a detailed example in Appendix A to elucidate the distinctions between trial and sample. A trial is usually a much longer sample. It can span hours or even days, containing hundreds of thousands of timestamps, which is impractical for models to train on entire trials effectively. Consequently, researchers commonly partition them into shorter samples for training purposes [1][2][3].
2) Signal segmentation has been a standard operation in signal processing for decades. Our approach to segmenting each dataset is detailed in Appendix C.
3) The window size can be regarded as a hyper-parameter. Empirically, this selection ranges from 1 to 10 seconds, contingent on factors such as sampling frequency and data type [2][3]. We opted for 1 second simply due to its superior performance.
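A minimal sketch of this trial-to-sample segmentation (assuming a 256 Hz sampling rate and a non-overlapping 1-second window, matching "each sample is a one-second interval with 256 observations"; the names here are illustrative, not the authors' code):

```python
def segment_trial(trial, fs=256, window_sec=1.0):
    """Split a long trial (a sequence of observations) into non-overlapping
    fixed-length samples; a trailing remainder shorter than one window is
    dropped. With fs=256 and window_sec=1.0, each sample has 256 observations."""
    w = int(fs * window_sec)
    return [trial[i:i + w] for i in range(0, len(trial) - w + 1, w)]

samples = segment_trial(list(range(256 * 3 + 10)))
# -> 3 samples of 256 observations each; the 10 leftover observations are dropped
```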
***
**Q6** Implementation details for Observation-level data consistency. How $t^-$ is chosen? How to guarantee that $x_{i,t^-}$ is more different to $x_{i,t}$ than $\widetilde{x}_{i,t}$, as time series data often display repetitive periodicity?
**A6** Our answer contains three parts.
1) Only observations augmented from the same raw observation are treated as positive pairs, while all observations with distinct timestamps within a sample are designated as $t^-$ (Fig. 2; Sec. 4.2). The principle to select negative observations ($x_{i,t^-}$) aligns with the majority of contrastive learning studies for negative samples [6].
2) As we employed a projection layer to map raw data into embeddings before applying data augmentation, such mapping disrupts the periodicity of the raw data.
3) Similar to sample-level contrasting, there is no assurance that all negative samples are exclusively hard negatives (true negatives). It is quite common for certain negative samples to share the same label as the anchor sample [9]. In other words, there is no certainty, nor is it necessary, that all negative observations are dissimilar to the anchor observation.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the detailed response as well as the added experiments, which have answered part of my concerns. However, first, I do not agree with the authors that "the majority of real-world applications of medical time series for disease diagnosis are binary classification tasks". In fact, real-world medical applications must be able to handle multi-class classification. Secondly, I would suggest the authors refine their notations. In particular, it is quite confusing to use the same notation x to indicate observation, sample, and trial. Thirdly, both the sample-level consistency and the trial-level consistency are compared at the sample level. The difference lies in how the triplets are generated. Why does the former use the InfoNCE loss while the latter uses the NT-Xent loss? The decision seems to be quite ad hoc. Given the above, I have raised my rating to 4.
---
Reply to Comment 1.1.1:
Title: Further explanations for your three comments
Comment: We thank the reviewer for the active feedback and for giving us the opportunity to further address your concerns.
**New Q1**: I do not agree with authors that "the majority of real-world applications … are binary classification tasks". In fact, real-world medical applications must be able to handle multi-class classification.
**New A1**:
1. We highly agree with the reviewer that real-world applications should be able to handle multi-class classification. Although the current dominant trend of AI research for medical time series centers around simple binary tasks [1-4], the complexity of medical data and the diversity of clinical applications call for more comprehensive approaches.
2. In response to your concern, we have demonstrated the efficacy of our model in addressing multi-class classification challenges. We have conducted an additional experiment on a large-scale ECG dataset (PTB-XL, 17596 patients, 191400 samples) with 5 classes (see A3 and R-table 3).
3. We acknowledge that more studies and analyses of multi-class tasks would be helpful. We will discuss this in the limitations of this paper and list it as important future work.
[1] Liang, H., et al., 2019. Evaluation and accurate diagnoses of pediatric diseases using artificial intelligence. Nature medicine,.
[2] Hicks, S.A., et al., 2022. On evaluation metrics for medical applications of artificial intelligence. Scientific reports.
[3] Buch, V.H., et al., 2018. Artificial intelligence in medicine: current trends and future possibilities. British Journal of General Practice.
[4] Soenksen, L.R., et al. 2022. Integrated multimodal artificial intelligence framework for healthcare applications. NPJ digital medicine.
**New Q2**: Refine their notations. In particular, it is quite confusing to use the same notation x to indicate both observation, sample, and trial.
**New A2**:
We appreciate your attention to the notations used in our work.
First, we clarify we didn’t use the same notation to indicate all of observation, sample, and trial. In this paper, our $x$ consistently represents a sample, while $r$ and $p$ stand for trial and patient, respectively.
- Within the sample level, $x_i$ denotes a sample with index $i$, and in the observation level, $x_{i, t}$ signifies an observation within sample $x_i$ at timestamp $t$. The subscripts $_i$ and $_t$ denote the $i$-th sample and the $t$-th observation.
- However, at the patient and trial levels, the subscript $i$ is unnecessary: the sample index does not matter there because we mainly care about how the positive/negative sample pairs are formulated. The focus shifts to whether the sample $x$ originates from the same patient or trial. So, to simplify the notations, we omitted the subscript $_i$ at the patient and trial levels. Instead, we use the superscripts $^+$ and $^-$ to denote positive and negative samples, respectively, to emphasize that the sample-pair construction is more important than the sample index.
Second, we struggled with the notation design of our paper for a long time. We explored various plans, such as using different subscripts and superscripts to represent different levels (e.g., letting $x_{s,o}^{p,t}$ denote the $o$-th observation from the $s$-th sample of the $t$-th trial of the $p$-th patient). However, we found such notations overly complex and difficult to comprehend. Consequently, we opted for a simpler notation scheme (i.e., the one presented in the paper) to make the whole model easier to follow.
Your attention to our notation design is greatly valued, and we truly welcome any suggestions and advice to help us further polish the notations. We remain committed to refining our notations to ensure they are accessible and intuitive.
**New Q3**: ....Why the former use InfoNCE loss while the later use NT-Xent loss? ...
**New A3**: This is Q8 of your original comments, which we answered in the rebuttal. Due to the space limitation in responses, we presented the answers to your Q7-Q10 in the global response (prior to the references). For your convenience, we paste our answer here:
- We observed that TS2Vec performs effectively for observation and sample levels. Concurrently, CLOCS introduced contrastive learning on ECG data, utilizing patient-level information, which bears similarity to our implementation at the trial level. Thus, we adopted their loss functions as a foundation for designing our own loss functions across distinct granularities.
- Moreover, we clearly understand that InfoNCE and NT-Xent are both derived from the NCE loss family. We conducted preliminary experiments comparing using InfoNCE at all four granularities, using NT-Xent at all granularities, and interchangeably using InfoNCE or NT-Xent at different granularities. The experimental results show that InfoNCE and NT-Xent lead to very similar outcomes, while NT-Xent converges a bit faster. We will discuss this in our limitations and future work.
Rebuttal: We appreciate the valuable feedback provided by all the reviewers. We highly appreciate the reviewers who believe our work is solid, effective, easy to follow, and well-written, with a logical presentation of the methodological contribution of the proposed method. In response to the reviewers' insightful comments, we have conducted additional experiments, including two new ablation studies, two new baselines, and a new dataset. We addressed your concerns and suggestions in the individual responses. We will incorporate the newly added content into the final version of the paper for the camera-ready submission.
Thank you again for your thoughtful comments. We have worked hard to refine our paper, and we sincerely hope our responses are informative and helpful. If you feel we have not sufficiently addressed your concerns to motivate increasing your score, we would love to hear from you further on what points of concern remain and how we can improve our work. Thank you again!
Due to space limitations, we have included **some responses, all the tables, and all the references** within this global rebuttal. The experiment results of the ablation study in R-tables 1 and 2 can be found in the attached PDF file. Here is R-table 3.
***
**R-table 3: Comparison on the new large-scale dataset PTB-XL with the new baselines CLOCS and NCL, and the best previous baseline TS2Vec.**
| Label Fraction | Method | Accuracy | F1 | AUROC | AUPRC |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| 100% | CLOCS | 72.88±0.35 | 59.97±0.66| 88.30±0.17| 62.79±0.20|
| 100% | NCL | 72.63±0.09 | 61.16±0.39 | 89.28±0.00 | 65.17±0.45 |
| 100% | TS2Vec | 72.48±0.39 | 60.50±0.20 | 89.18±0.17 | 64.84±0.58 |
| 100% | COMET(Ours) | **74.18±0.02** | **61.63±0.06** | **89.29±0.11** | **66.12±0.52** |
| 10% | CLOCS | 67.00±0.82 | 54.81±0.36 | 83.66±0.45| 55.97±0.63|
| 10% | NCL | 68.12±0.04 | 56.62±0.39 | 86.02±0.12 | 60.17±0.47 |
| 10% | TS2Vec | 69.21±0.48 | 57.29±1.23 | **86.55±0.09** | 61.12±0.41 |
| 10% | COMET(Ours) | **70.69±0.19** | **57.80±0.23** | 86.37±0.25 | **61.78±0.10** |
| 1% | CLOCS | 52.16±2.35 | 36.39±2.43 | 69.68±1.85 | 35.48±2.13 |
| 1% | NCL | 49.92±1.13 | 31.96±0.64 | 68.99±0.36 | 33.75±0.73 |
| 1% | TS2Vec | 54.76±0.69 | 38.93±1.66 | 74.47±0.35 | 39.60±0.09 |
| 1% | COMET(Ours) | **60.38±0.21** | **43.67±0.65** | **77.59±0.75** | **45.75±0.93** |
***
## Here are some continued responses to reviewer **Trt3**'s questions.
***
**Q7** Would jittering and masking break the continuity of the normal time series data?
**A7** We respectfully clarify that we did not use jittering (Line 196 just serves as an example). Our experiments solely used masking (Sec. 5 implementations; Appendix B data augmentation bank). As previously mentioned in A2, we carefully designed the strength of the data augmentation (10% masking ratio) to maximize the preservation of features within the time series, so it will not distort the data continuity.
***
**Q8** Explain the usage of InfoNCE at observation and sample blocks while using NT-Xent loss at trial and patient levels.
**A8** We observed that TS2Vec performs effectively for observation and sample levels. Concurrently, CLOCS introduced contrastive learning on ECG data, utilizing patient-level information, which bears similarity to our implementation at the trial level. Thus, we adopted their loss functions as a foundation for designing our own loss functions across distinct granularities. Broadly speaking, these loss functions are all variants of the InfoNCE loss function.
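For readers unfamiliar with the losses discussed here, a minimal pure-Python sketch of the NT-Xent loss (our own simplified illustration, not the paper's implementation; InfoNCE has the same softmax-over-similarities form with a different pairing scheme):

```python
import math

def _cos(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent over paired embeddings: z1[i] and z2[i] are two views of the
    same sample (positive pair); all other embeddings in the 2N batch act as
    negatives for each anchor."""
    z = z1 + z2
    n = len(z1)
    total = 0.0
    for i in range(2 * n):
        pos = (i + n) % (2 * n)  # index of i's positive partner
        denom = sum(math.exp(_cos(z[i], z[k]) / tau)
                    for k in range(2 * n) if k != i)
        total += -math.log(math.exp(_cos(z[i], z[pos]) / tau) / denom)
    return total / (2 * n)

good = nt_xent([[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.1], [0.1, 1.0]])
bad = nt_xent([[1.0, 0.0], [0.0, 1.0]], [[0.1, 1.0], [1.0, 0.1]])
# correctly matched views yield a lower loss than mismatched views
```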
***
**Q9** SOTA comparison: why NCL and CLOCS are not considered?
**A9** We did a test run of CLOCS before submission and found it extremely time-consuming (more than 10 hrs for the PTB dataset in a single run). As a result, we chose some other SOTAs as baselines. Regarding NCL, their definition of neighborhood contrasting inspired us to design trial-level contrasting. We compare with CLOCS and NCL on the newly added large-scale dataset PTB-XL. See R-table 3.
***
**Q10** Any ablation study on the hyperparameters $\lambda_i$ is missing.
**A10** We have added an ablation study on $\lambda_i$. See R-tables 1 and 2.
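For clarity, the $\lambda_i$ hyper-coefficients simply weight the four granularity losses in the overall training objective; a minimal illustrative sketch (names and default values assumed, not the authors' code):

```python
def comet_objective(obs_loss, sample_loss, trial_loss, patient_loss,
                    lambdas=(1.0, 1.0, 1.0, 1.0)):
    """Weighted sum of the four granularity losses (observation, sample,
    trial, patient); the ablation varies these weights, e.g., up-weighting
    the observation block."""
    losses = (obs_loss, sample_loss, trial_loss, patient_loss)
    return sum(w * l for w, l in zip(lambdas, losses))

total = comet_objective(0.4, 0.3, 0.2, 0.1, lambdas=(2.0, 1.0, 1.0, 1.0))
# total ≈ 1.4  (2·0.4 + 0.3 + 0.2 + 0.1)
```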
***
# Reference
[1] Kiyasseh, Dani, et al. "CLOCS: Contrastive learning of cardiac signals across space, time, and patients." ICML, 2021.
[2] Lan, Xiang, et al. "Intra-inter subject self-supervised learning for multivariate cardiac signals." AAAI, 2022.
[3] Banville, Hubert, et al. "Uncovering the structure of clinical EEG signals with self-supervised learning." Journal of Neural Engineering, 2021.
[4] Tonekaboni, et al. "Unsupervised representation learning for time series with temporal neighborhood coding." arXiv, 2021.
[5] Ieracitano, et al. "A convolutional neural network approach for classification of dementia stages based on 2D-spectral representation of EEG recordings." Neurocomputing, 2019.
[6] Yue, Zhihan, et al. "TS2Vec: Towards universal representation of time series." AAAI, 2022.
[7] Tzimourta, Katerina D., et al. "Machine learning algorithms and statistical approaches for Alzheimer's disease analysis based on resting-state EEG recordings: A systematic review." International Journal of Neural Systems, 2021.
[8] Zhang, Xiang, and Lina Yao. Deep Learning for EEG-Based Brain–Computer Interfaces: Representations, Algorithms and Applications. 2021.
[9] Kalantidis, Yannis, et al. "Hard negative mixing for contrastive learning." NeurIPS, 2020.
[10] Xinyu Yang, et al. "A self-supervised contrastive learning framework for univariate time series representation." Knowledge-Based Systems, 2022.
[11] Machine Learning Mastery. "Introduction to Regularization to Reduce Overfitting and Improve Generalization Error." https://machinelearningmastery.com/introduction-to-regularization-to-reduce-overfitting-and-improve-generalization-error/
[12] TensorFlow. "Overfit and Underfit." https://www.tensorflow.org/tutorials/keras/overfit_and_underfit
Pdf: /pdf/03dbb96c62aaaea35ad8e0d0b5115238a61d4f70.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Projection-Free Online Convex Optimization via Efficient Newton Iterations | Accept (poster) | Summary: Building upon recent progress in projection-free OCO, this paper presents a new IPM-like algorithm attaining optimal regret with improved efficiency, which calculates Hessians only $O(\sqrt{T})$ number of times. The algorithm assumes access to a self-concordant function by gradient oracles and Hessian oracles.
Strengths: This paper makes a solid contribution on more efficient projection-free OCO, which admits optimal regret and better efficiency at the same time. Moreover, it also opens a new avenue of using IPM-like methods for projection-free OCO, besides classic LOO-based methods and SO/MO-based approaches developed recently.
Weaknesses: There is an author's comment in line 160, which I suspect violates the double-blind rule. Though the contents are interesting, I regret to reject the paper for this reason.
I think the paper can benefit from a detailed comparison with the ONS algorithm in HAK07.
There are many typos, here are some that I caught:
Line 12: expansive - expensive (it appears multiple places)
Line 39: insure - ensure
Line 129: interior - interior of
Line 13 in the alg box: there should be another line of Set $H_{t+1}=H_t$?
---Update---
After a discussion with the AC, the comment is considered as a typo and I changed the score accordingly.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Can you elaborate more on how to obtain Corollary 1?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: There is an author's comment in line 160, which I suspect violates the double-blind rule.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive review.
**“There is an author's comment in line 160, which I suspect violates the double-blind rule...”**
Unfortunately, this was a typo that we only spotted shortly after the submission deadline.
**“I think the paper can benefit from a detailed comparison with the ONS algorithm in HAK07.”**
We note that the ONS algorithm in [Hazan et al. 2007] is an algorithm for online exp-concave optimization and is not intended for use in the OCO setting we consider in this paper; naively applying ONS to OCO would lead to suboptimal (even linear) regret. This is because ONS uses certain quadratic terms (designed for the exp-concave setting) in the regularizer that can lead to a linear regret in the OCO setting. Furthermore, the ONS algorithm requires generalized projections at each iteration which is precisely what we are trying to avoid as these can be computationally costly; see discussion in [Garber and Kretzu 2023].
**“There are many typos, here are some that I caught”**
The typos will be fixed in the revision. And you are correct, there should be $H_{t+1}=H_t$ on line 13 of the algo (we have already spotted this and fixed it).
**“Can you elaborate more on how to obtain Corollary 1?”**
When using the LS barrier, results in [Lee and Sidford 2019] imply that we have
- $\mathcal{C}^{\texttt{grad}}_{\varepsilon}\leq \widetilde{O}(\mathcal{C}^{\texttt{sys}} \cdot \log (\varepsilon^{-1}))$; and
- $\mathcal{C}^{\texttt{hess}}_{\varepsilon} = \widetilde{O}(\mathcal{C}^{\texttt{sys}} \sqrt{d} \cdot \log (\varepsilon^{-1}))$,
where $\mathcal{C}^{{\texttt{sys}}}$ is the computational cost of solving a linear system of the form $A^\top \text{diag}(v) A x = y$, for vectors $v\in \mathbb{R}^{m}_{\geq 0}$ and $y\in \mathbb{R}^d$; here $A$ represents the constraint matrix of the polytope. In the worst case, the linear-system solve cost is $O(m d^{\omega -1})$ (i.e. $\mathcal{C}^{{\texttt{sys}}}\leq O(m d^{\omega -1})$), where $\omega$ is the exponent of matrix multiplication. Plugging these bounds into the computational complexity in Theorem 6 should imply the claimed complexity in Corollary 1. We will add these details in the revision.
**References:**
Elad Hazan, Amit Agarwal, and Satyen Kale. "Logarithmic regret algorithms for online convex optimization." Machine Learning 69 (2007): 169-192.
Yin Tat Lee, and Aaron Sidford. "Solving linear programs with sqrt (rank) linear system solves." arXiv preprint arXiv:1910.08033 (2019).
Dan Garber and Ben Kretzu. "Projection-free Online Exp-concave Optimization." arXiv preprint arXiv:2302.04859 (2023).
---
Rebuttal Comment 1.1:
Title: Reply to Authors
Comment: Thank you for your reply.
By comparison with ONS I meant what you wrote here, and I believe such discussion can make the contributions of this work more clear.
As for Corollary 1, I still don't get where the $d^{\frac{7}{2}}$ term comes from. Plugging these bounds into Theorem 6, the computation seems to be $\tilde{O}(md^{\omega-1}T+d^2T+md^{\omega}\sqrt{T})$.
---
Reply to Comment 1.1.1:
Title: Clarifying the complexity in Corollary 1
Comment: That is indeed the correct complexity, which may further be simplified to $\widetilde{O}(m d^{\omega -1} T + m d^{\omega} \sqrt{T})$, if we assume that $d^2 \leq m d^{\omega-1}$; $m\geq d$ is the interesting setting since otherwise one can just use the standard log-barrier instead of the LS-barrier. Corollary 1 displays $d^{7/2} \sqrt{T}$ instead of $m d^{\omega} \sqrt{T}$, for the lower-order term. I believe this was just a typo. | Summary: This paper introduces new projection-free algorithms for online convex optimization, which utilize Newton iterates with a self-concordant barrier for the target set. The authors establish a state-of-the-art regret bound for this algorithm.
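As a quick numeric sanity check of the simplifying assumption above (our own illustration, with $\omega \approx 2.372$ taken as an assumed value for the matrix-multiplication exponent): $d^2 \leq m d^{\omega-1}$ is equivalent to $d^{3-\omega} \leq m$, which holds whenever $m \geq d$ since $3-\omega < 1$.

```python
# Sanity check (illustrative): with omega ~ 2.372 (assumed value of the
# matrix-multiplication exponent), d^2 <= m * d^(omega - 1) reduces to
# d^(3 - omega) <= m, which holds whenever m >= d because 3 - omega < 1.
omega = 2.372

for d in (10, 100, 1000, 10_000):
    m = d  # the boundary case m = d; larger m only helps
    assert d**2 <= m * d ** (omega - 1)
    assert d ** (3 - omega) <= m
```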
Strengths: Strengths And Weaknesses:
The paper presents a new approach for projection-free online convex optimization. The main advantage of the proposed algorithm is that it only requires computing a full inverse of the Hessian in a vanishing O(1/\sqrt{T}) fraction of each round. Additionally, for the case of a polytope with m constraints, their method exhibits lower per-iteration computational cost compared to linear optimization. Furthermore, their method achieves a better regret bound than existing works in this specific scenario.
However, I have some concerns that I would like the authors to address:
It would be beneficial to have a comparison of the gradient complexity between this method and other related works. Additionally, it would be valuable to provide further analysis of the computational complexity mentioned in line 208. Specifically, it would be helpful to clarify under which circumstances the gradient complexity dominates the Hessian complexity, and vice versa.
Regarding the computation complexity mentioned in line 208, there is a coefficient M_{\Phi}. In some cases, M_{\Phi} can be significantly larger than d. For instance, when considering a polytope with m constraints, M_{\Phi} is related to m, and m can be exponential in d.
In Theorem 6, the authors assume that the local norm of g_t is less than C for all w_t. This assumption is not trivial, as the Hessian of a self-concordant function is typically unbounded.
Weaknesses: See above.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review.
**“It would be beneficial to have a comparison of the gradient complexity between this method and other related works.”**
In previous works, the gradient complexity refers to the number of gradient computations of the losses. Our algorithm only requires a single gradient computation of the loss at each step, which is on par with previous approaches. We suspect that you are referring to $\mathcal{C}^{\texttt{grad}}$. We answer this next.
**“Additionally, it would be valuable to provide further analysis of the computational complexity mentioned in line 208. Specifically, it would be helpful to clarify under which circumstances the gradient complexity dominates the Hessian complexity, and vice versa.…”**
The Hessian cost $\mathcal{C}^{\texttt{hess}}$ will typically dominate the gradient cost $\mathcal{C}^{\texttt{grad}}$ for the barriers of essentially all common convex sets used in practice. To get a sense of this, let’s look at the case of a polytope in $\mathbb{R}^d$ with $m$ constraints (these details will be added in the revision). When using a standard log-barrier $\Phi$, we have $M_{\Phi}=1$, $\mathcal{C}^{\texttt{grad}}\leq O(md)$, and $\mathcal{C}^{\texttt{hess}} = O(m d^{\omega -1})$ (these are the costs of computing the gradients and Hessians exactly), where $\omega$ is the exponent of matrix multiplication. Now, if we use the LS barrier as in Section 4 (which confers benefits in terms of the regret), results in [Lee and Sidford 2019] imply that we have
- $M_{\Phi}=O(\log(m)^{2/5})$;
- $\mathcal{C}^{\texttt{grad}}_{\varepsilon}\leq \widetilde{O}(\mathcal{C}^{\texttt{sys}} \cdot \log(\varepsilon^{-1}))$; and
- $\mathcal{C}^{\texttt{hess}}_{\varepsilon} = \widetilde{O}(\mathcal{C}^{\texttt{sys}} \sqrt{d} \cdot \log(\varepsilon^{-1}))$,
where $\mathcal{C}^{{\texttt{sys}}}$ is the computational cost of solving a linear system of the form $A^\top \text{diag}(v) A x = y$, for vectors $v\in \mathbb{R}^{m}_{\geq 0}$ and $y\in \mathbb{R}^d$; here $A$ represents the constraint matrix of the polytope. So for both the LS- and log-barrier, the cost of approximate Hessian evaluation dominates that of (approximate) gradient evaluation. We will add details on this in the revision.
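To make the log-barrier costs above concrete, here is a minimal numpy sketch (our illustration, not code from the paper) for a polytope $\{x : Ax \leq b\}$: the gradient $A^\top s^{-1}$ costs $O(md)$, and forming the Hessian $A^\top \mathrm{diag}(s^{-2}) A$ costs $O(md^{\omega-1})$ with fast matrix multiplication (here done naively in $O(md^2)$).

```python
import numpy as np

def log_barrier_grad_hess(A, b, x):
    """Gradient and Hessian of Phi(x) = -sum_i log(b_i - a_i^T x)."""
    s = b - A @ x                               # slacks; positive iff x strictly feasible
    assert np.all(s > 0), "x must be strictly inside the polytope"
    grad = A.T @ (1.0 / s)                      # O(m d)
    hess = A.T @ ((1.0 / s**2)[:, None] * A)    # O(m d^2) naively
    return grad, hess

rng = np.random.default_rng(0)
m, d = 20, 5
A = rng.standard_normal((m, d))
b = np.abs(A).sum(axis=1) + 1.0                 # makes x = 0 strictly feasible
g, H = log_barrier_grad_hess(A, b, np.zeros(d))

# finite-difference check of the gradient at x = 0
phi = lambda x: -np.sum(np.log(b - A @ x))
eps = 1e-6
fd = np.array([(phi(eps * e) - phi(-eps * e)) / (2 * eps) for e in np.eye(d)])
assert np.allclose(g, fd, atol=1e-4)
assert np.allclose(H, H.T) and np.all(np.linalg.eigvalsh(H) > 0)  # Hessian is SPD
```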
**“Regarding the computation complexity mentioned in line 208, there is a coefficient M_{\Phi}...”**
As we mentioned in the previous paragraph (see also display after Line 229), the constant $M_\Phi$ for the LS barrier [resp. Log-barrier] is such that $M_{\Phi}=O(\log(m)^{2/5})$ [resp. $M_{\Phi}=1$]. And so for the application to polytopes, $M_{\Phi}$ is at most logarithmic in $m$ (and so at most polynomial in $d$ when $m$ is exponential in $d$). Note that $M_{\Phi}$ is not to be confused with the parameter $\nu$ of the self-concordant barrier (see Definition 2), which is the one that may scale polynomially in $m$ depending on the choice of the barrier. In fact, for the log-barrier, we have $\nu =m$, and for the LS barrier we have $\nu =d$; having $\nu$ being independent of $m$ is precisely the appeal of using the LS barrier.
**“In Theorem 6, the authors assume that the local norm of g_t is less than C for all w_t. This assumption is not trivial, as the Hessian of a self-concordant function is typically unbounded.”**
Note that the assumption we make involves the inverse of the Hessian, i.e. we assume $\lVert g_t\rVert_{\nabla^{-2}\Phi(w_t)}$ is bounded.
So large Hessian eigenvalues only make things better here. In general, this assumption should be thought of as being much weaker than the standard Lipschitzness assumption in OCO. In fact, there are many settings where the losses are non-Lipschitz, i.e. where $\lVert g_t\rVert$ is unbounded, but the local norm $\lVert g_t\rVert_{\nabla^{-2}\Phi(w_t)}$ is bounded. This includes, for example, the setting of linear regression with the log-loss (see e.g. Section 6 and Lemma 7 in [Rakhlin and Sridharan 2015]) and the classical portfolio selection setting (see e.g. [Luo et al. 2018]). There are other settings where the local gradient norms can be much smaller than their standard Euclidean counterparts; see [Abernethy et al. 2012] for a discussion.
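A one-dimensional toy example (our own illustration, taking $(0,1)$ with its log-barrier as an assumed domain) makes this concrete: the log-loss gradient $g(x) = -1/x$ blows up near the boundary, yet its local norm $|g(x)|/\sqrt{\Phi''(x)}$ stays bounded by $1$.

```python
import numpy as np

# 1-D illustration: on (0, 1) with log-barrier Phi(x) = -log(x) - log(1 - x),
# the log-loss gradient g(x) = -1/x is unbounded near 0, but the local norm
# |g(x)| / sqrt(Phi''(x)) never exceeds 1.
xs = np.geomspace(1e-8, 0.5, 50)
g = -1.0 / xs                                  # Euclidean norm explodes near 0
phi2 = 1.0 / xs**2 + 1.0 / (1.0 - xs) ** 2     # barrier Hessian Phi''(x)
local = np.abs(g) / np.sqrt(phi2)              # local norm ||g||_{Phi''^{-1}}
assert np.abs(g).max() > 1e7                   # ordinary gradient is huge
assert local.max() <= 1.0                      # local norm stays bounded
```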
**References:**
Jacob D. Abernethy, Elad Hazan, and Alexander Rakhlin. "Interior-point methods for full-information and bandit online learning." IEEE Transactions on Information Theory 58, no. 7 (2012): 4164-4175.
Alexander Rakhlin, and Karthik Sridharan. "Sequential probability assignment with binary alphabets and large classes of experts." arXiv preprint arXiv:1501.07340 (2015).
Haipeng Luo, Chen-Yu Wei, and Kai Zheng. "Efficient online portfolio with logarithmic regret." Advances in neural information processing systems 31 (2018).
Yin Tat Lee and Aaron Sidford. "Solving linear programs with sqrt (rank) linear system solves." arXiv preprint arXiv:1910.08033 (2019).
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. My concerns have been addressed; therefore, I have raised my score. | Summary: This paper proposes new projection-free algorithms for online convex optimization over a convex domain $\mathcal{K}$.
Specifically, this paper proposes efficient Newton iterations to obtain projection-free online convex optimization.
Strengths: This paper proposes an efficient online Newton method which is computation efficient.
Weaknesses: This paper seems to only apply existing techniques to the online Newton method. Although the authors claim that, unlike the setting of [26] (where it is possible to add quadratic terms to the barrier for additional stability), in their setting this cannot be done without sacrificing performance in terms of regret, we could not find any theoretical hardness result in this paper substantiating this obstacle.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: no
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: no
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review.
**“This paper seems only using existing method to the online Newton.”**
Our algorithm does apply novel techniques to achieve the desired regret guarantee in the general OCO setting. In fact, applying existing techniques such as those in [Mhammedi and Gatmiry 2023] fails for two reasons:
The first reason, as you already mentioned, is that we cannot add quadratic terms as in [Mhammedi and Gatmiry 2023] since this would lead to a suboptimal regret in the OCO setting. These terms confer stability to the iterates of their algorithm, and they are able to add them without sacrificing performance because they consider the online exp-concave optimization setting. One novelty of our algorithm is in the choice of regularizer, which ensures stability without any quadratic terms.
Stability aside, the other issue is that their algorithm performs Taylor expansions to approximate Hessian matrices, which are required to compute the Newton iterates. Note that these Taylor expansions are only computationally efficient when the feasible set is a Euclidean ball, and so this does not apply to our OCO setting with general convex sets. To overcome this issue, we replace Taylor expansions by multiple Newton steps per round (see Lines 6-9 of Algorithm 1), and we show (via a non-trivial analysis) that doing this essentially leads to the same guarantee as one would get using Taylor expansions. As far as we know, this approach of taking multiple Newton steps per round is novel.
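To illustrate the flavor of this idea (a hedged 1-D toy of our own, not Algorithm 1 itself: it uses exact Hessians and a single fixed objective), damped Newton steps on a barrier-regularized linear objective stay strictly feasible and converge in a handful of iterations.

```python
import math

def newton_steps(c, x0, k=10):
    """Damped Newton on f(x) = c*x - log(x) - log(1 - x) over (0, 1).

    The damping 1/(1 + lam), with lam the Newton decrement, keeps every
    iterate strictly inside the domain (a standard self-concordance fact).
    """
    x = x0
    for _ in range(k):
        grad = c - 1.0 / x + 1.0 / (1.0 - x)
        hess = 1.0 / x**2 + 1.0 / (1.0 - x) ** 2
        lam = abs(grad) / math.sqrt(hess)       # Newton decrement
        x = x - (grad / hess) / (1.0 + lam)     # damped Newton step
    return x

x_star = newton_steps(c=2.0, x0=0.5)
# stationarity: c - 1/x + 1/(1-x) = 0, whose root in (0,1) is 1 - sqrt(2)/2
assert abs(2.0 - 1.0 / x_star + 1.0 / (1.0 - x_star)) < 1e-6
assert abs(x_star - (1.0 - math.sqrt(2.0) / 2.0)) < 1e-6
```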
**References:**
Zakaria Mhammedi and Khashayar Gatmiry. "Quasi-newton steps for efficient online exp-concave optimization." In The Thirty Sixth Annual Conference on Learning Theory, pp. 4473-4503. PMLR, 2023.
---
Rebuttal Comment 1.1:
Comment: I believe that by introducing the self-concordant barrier, the technique of [Mhammedi and Gatmiry 2023] can be easily adapted to this paper.
---
Reply to Comment 1.1.1:
Title: Reply to kmvY
Comment: We respectfully disagree. As explained in the rebuttal, the Taylor expansion used in [Mhammedi and Gatmiry 2023] cannot be used with sets other than a Euclidean ball while ensuring computational efficiency (this is not a matter of introducing a self-concordant barrier). Using multiple Newton iterates instead of a Taylor expansion is a non-trivial solution to the problem. | Summary: This paper proposes a "projection-free" algorithm for online convex optimization. The proposed algorithm adopts a self-concordant barrier of the constraint set as the regularizer, automatically ensuring the feasibility of the actions. The proposed algorithm only requires computing the inverse of an approximate Hessian at some rounds, instead of all rounds, achieving a small overall complexity when, e.g., the constraint set is a polytope.
---
The authors have addressed my questions. I keep the original rating.
Strengths: 1. The algorithm is new and the analysis is non-trivial.
2. The algorithm achieves the currently best regret performance for online convex optimization on a polytope.
Weaknesses: 1. **Computation of approximate gradient and Hessian.** I wonder how reasonable the error tolerances in the definitions of the approximate gradient and Hessian are, given that the Hessian inverse can have arbitrarily small eigenvalues in general. Can the approximate gradient and Hessian be easily computed in general?
2. **Explanations about complexity claims.** The claim that the per-round complexity of the algorithm is cheaper than that of the linear optimization oracle and the complexity bound in Corollary 1 need some explanations.
3. **Inconsistent names.** It is claimed that the Lewis-Weights barrier will be used in Section 1 and the title of Section 4, but Ln. 226 says the barrier is called the Lee-Sidford barrier.
4. **Presentation of Algorithm 1.** In Ln. 12--14, $H_{t + 1}$ is not specified.
5. **Typos.**
- Ln. 160: "[zm:cite]"
- Ln. 188: an -> and
- Ln. 189: turning -> tuning
- Ln. 190: "additional assumptions on the sequence of losses.": incomplete sentence.
- Ln. 199: if -> of
- Ln. 232: $\tilde{O} (1)$ is not an appropriate use of the notation.
- Ln. 299: "in the sense of (3) *and (3)*"
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please address the weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This is a theory paper. The assumptions are explicitly stated. Other possible issues I have noticed have been pointed out in the weaknesses block.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive review and helpful suggestions.
**"I wonder how reasonable the error tolerances in the definitions of the approximate gradient and Hessian are…"**
To get a sense of this, let’s look at the case of a polytope in $\mathbb{R}^d$ with $m$ constraints. Here, we want to use the LS barrier (see Section 4) as it gives a regret bound that is independent of the number of constraints $m$ (this would not be the case with the standard log-barrier). When using the LS barrier, results in [Lee and Sidford 2019] imply that we have
- $\mathcal{C}^{\texttt{grad}}_{\varepsilon}\leq \widetilde{O}(\mathcal{C}^{\texttt{sys}} \cdot \log (\varepsilon^{-1}))$, and
- $\mathcal{C}^{\texttt{hess}}_{\varepsilon} = \widetilde{O}(\mathcal{C}^{\texttt{sys}} \sqrt{d} \cdot \log (\varepsilon^{-1}))$,
where $\mathcal{C}^{\texttt{sys}}$ is the computational cost of solving a linear system of the form $A^\top \text{diag}(v) A x = y$, for vectors $v\in \mathbb{R}^{m}_{\geq 0}$ and $y\in \mathbb{R}^d$; here $A$ represents the constraint matrix of the polytope. So when using the LS barrier, the computational efficiency is directly related to how efficiently this linear system can be solved. In the worst case, the linear-system solve cost is $O(m d^{\omega -1})$, where $\omega$ is the exponent of matrix multiplication, but in many practical cases, the cost is much smaller.
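The linear-system primitive $A^\top \mathrm{diag}(v) A x = y$ can be sketched directly (our illustration; a practical solver would exploit structure rather than forming and factoring the matrix, which is the naive $O(md^{\omega-1})$ route):

```python
import numpy as np

# Naive solve of A^T diag(v) A x = y for a random m x d constraint matrix A
# and positive weights v (so the system matrix is symmetric positive definite).
rng = np.random.default_rng(1)
m, d = 30, 6
A = rng.standard_normal((m, d))
v = rng.random(m) + 0.1          # strictly positive weights
y = rng.standard_normal(d)
x = np.linalg.solve(A.T @ (v[:, None] * A), y)
assert np.allclose(A.T @ (v * (A @ x)), y)   # residual check
```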
We note that if a linear optimization oracle (which is typically used by other projection-free algorithms) is implemented via an interior point method, then the state-of-the-art approach would require $\sqrt{d}$ linear-system solves of the type described in the previous paragraph (see [Lee and Sidford 2019]). This means that our algorithm can be a factor $\sqrt{d}$ faster than linear-optimization-based projection-free algorithms (because our algorithm only requires $\widetilde{O}(1)$ such linear-system solves per round). We will add details on this in the revision (together with clarifications for the complexity in Corollary 1).
**“The claim that the per-round complexity of the algorithm is cheaper than that of the linear optimization oracle”**
Please see previous paragraph (we will add details in the revision).
**“Inconsistent names” + “Presentation of Algorithm 1. In Ln. 12--14, $H_{t+1}$ is not specified.”**
These are typos that we will fix. Thank you for pointing them out.
**References:**
Yin Tat Lee and Aaron Sidford. "Solving linear programs with sqrt (rank) linear system solves." arXiv preprint arXiv:1910.08033 (2019).
---
Rebuttal Comment 1.1:
Title: Keeping rating
Comment: Thanks for the explanation. Please add these details.
---
Reply to Comment 1.1.1:
Title: Acknowledgment
Comment: Thanks. We will make sure to add these details. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Error Bounds for Learning with Vector-Valued Random Features | Accept (spotlight) | Summary: This paper presents error analysis for learning with vector-valued random features, where the output takes vector values. Specifically, the paper considers random feature ridge regression and derive a general bound for the population risk functional, based on which the paper gives several results such as convergence rates for well-specified and mis-specified problems, statistical consistency as well as almost sure bounds. The main results show that $O(\sqrt{N})$ random features are sufficient to derive error bounds of the order $O(1/\sqrt{N})$, where $N$ is the number of training examples. This extends the existing results from real-valued output to vector-valued outputs.
Strengths: The paper provides comprehensive analysis and consider approximation, generalization, misspecification and noisy observations. Furthermore, the paper implies minimax optimal convergence rates in the well-specified setting.
The paper is very well written, and the analysis seems to be rigorous.
The technique does not rely on the explicit RF-RR solution formula, and avoids the use of matrix concentration inequalities.
Weaknesses: While the technique does not use matrix concentration inequalities, the paper only considers random feature ridge regression with square loss. Therefore, the problem considered in the paper is a bit limited. It is not quite clear to me whether the technique developed here can be extended to learning with other loss functions, e.g., logistic loss. Can we get similar error bounds for vector-valued learning with random features and logistic loss?
In Theorem 3.12, the error bounds are of the order $\lambda^{r\land 1}$. Therefore, the error bounds would improve if we have very nice regularity with $r>1$. In other words, the results suffer from a saturation phenomenon. It is not clear to me whether the technique developed here can overcome this saturation phenomenon.
The obtained bounds do not show an explicit dependency on the output dimension $p$. It is not clear to me how the output dimension would affect the performance of learning with vector-valued outputs.
Typo:
Above Eq (3.5): "optimal the sense" should be "optimal in the sense"
Above Eq (3.5): "a short calculation deliver" should be "a short calculation delivers"
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can the analysis here be extended to learning with logistic loss functions. In this case, we also do not have explicit formula for the solution?
Can we derive improved results if $r>1$ to overcome the saturation phenomenon?
How would the output dimension affect the convergence rates of the learning with vector-valued outputs.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I do not see concerns on negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We start by thanking the reviewer for your appreciation of the merits of our paper and your welcome suggestions to improve it. We will correct the identified typos in the revision. Below, we address the concerns raised by the reviewer and thank the reviewer in advance for their patience in reading our detailed reply.
1. Closely related analysis can indeed be used to derive bounds for other settings, in particular this has been done for Lipschitz continuous loss functions; relevant results have e.g. been obtained in [SGT18]. However, the available rates for a general Lipschitz loss are significantly worse than the rates we have achieved through our refined analysis in the quadratic (i.e., squared $L^2$ loss) setting. It is unclear to what extent our improved rates may be specific to the quadratic setting. One might conjecture that similarly improved rates (as compared to the general Lipschitz case), could at least be possible when replacing the quadratic loss by e.g. a quartic, or more general $L^q$-loss, instead. We plan to extend our analysis in this direction, in the future.
2. We thank the reviewer for their comments regarding the saturation phenomenon. We believe that this saturation is inherent to the choice of regularizer in ridge regression, and not a mere artifact of our analysis. We would like to expand on the reason for this intuition in more detail: if we neglect errors due to a finite number of data pairs and random features (i.e. formally take the limit $N,M\to \infty$), we arrive at the ridge regression problem (B.6) on the RKHS with $\vartheta := \lambda$. This problem is explicitly solvable, with solution $\mathcal{G}_{\vartheta}$ given in (B.4). Following the argument of Lemma B.3, the corresponding $L_{\nu}^2$ squared approximation error $\Vert \mathcal{G} - \mathcal{G}_{\vartheta}\Vert^2$ is generally of order $\vartheta \sum_j \lambda_j^r/(\lambda_j + \vartheta) \ge \vartheta \lambda_1^r/(\lambda_1 + \vartheta)$. For small $\vartheta$, say $\vartheta < \lambda_1$, this gives a lower bound $\Vert \mathcal{G} - \mathcal{G}_{\vartheta}\Vert^2\ge C\vartheta$ with $C = C(\lambda_1,r)>0$, which decays at best linearly in $\vartheta$, even if $r>1$. Replacing $\vartheta$ by $\lambda$ thus strongly suggests that our upper bound of the form $\lambda^{r\wedge 1}$ for the RFM should be accompanied by a (matching) lower bound of order $\lambda$ for $r>1$.
3. Generically, the output dimension does not affect the convergence rates in the vector-valued setting, only the constants in the bounds (e.g., through the norm on the space $\mathcal{Y}$). Indeed, even when $\mathcal{Y}$ is infinite-dimensional, we show in the numerical experiments (attached figure pdf file) that the error is not sensitive to the discretized output dimension $p\gg 1$. However, there exist other problems where the constants may blow up as $p\to\infty$, which would lead to a vacuous error bound. The underlying mathematical objects should have a well-defined meaning in this limit (e.g., taking the discretization resolution to infinity) to prevent this from happening.
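The saturation heuristic in point 2 can be checked numerically; the following sketch (our own illustration, with an assumed polynomially decaying spectrum $\lambda_j = j^{-2}$) shows that $\mathrm{err}(\vartheta) = \vartheta \sum_j \lambda_j^r/(\lambda_j+\vartheta)$ decays only linearly in $\vartheta$ even for $r = 2$:

```python
import numpy as np

# Numeric check of the saturation argument: the ratio err(theta)/theta stays
# bounded away from 0 as theta -> 0, because the j = 1 term alone contributes
# Omega(theta); extra smoothness r = 2 > 1 gives no theta^2 speedup.
lam = 1.0 / np.arange(1, 1001) ** 2            # assumed spectrum lam_j = j^{-2}

def err(theta, r):
    return theta * np.sum(lam**r / (lam + theta))

ratios = [err(t, r=2) / t for t in (1e-3, 1e-4, 1e-5)]
assert min(ratios) > 0.4                       # linear decay in theta, no faster
```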
In general, as alluded to in the conclusion of our submitted manuscript, we agree with the reviewer that determining conditions under which faster rates can be achieved is a very interesting and practically relevant question, which we plan to follow up on in future work. In particular, we believe faster rates can be achieved in our analysis, under certain conditions on the probability measure on random features, and utilizing the concept of local Rademacher complexity; we would like to mention that the results of our paper could have been obtained by invoking global Rademacher complexity estimates (this is implicit in certain steps of our proofs).
We sincerely hope that we have addressed the concerns of the reviewer to your satisfaction.
* [SGT18] Sun, Y., et al., "But how does it work in theory? Linear SVM with random features", *Advances in Neural Information Processing Systems*, **31**, (2018)
---
Rebuttal Comment 1.1:
Title: Thx for the response
Comment: Thank you for providing the point-to-point response, which are satisfactory to me.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for your reply. We are at your disposal during the discussion period if you have any further questions. | Summary: This paper presents a theoretical error analysis of infinite-dimensional input-output ridge regression with vector-valued random features. It is the first analysis adapted to infinite-dimensional outputs and even improves existing results for finite-dimensional outputs. Several by-products come with the main error bound, such as strong consistency in the misspecified setting and minimax optimal convergence rate in the well-specified setting. In addition, in the well-specified setting, the proposed analysis provides the sharpest parameter complexity, that is, the square root of the sample size. The analysis is based on a new proof technique, which is sketched in the last section.
Strengths: The subject addressed by the authors is topical since there is a lack of theoretical analysis of random Fourier methods with infinite-dimensional output in the literature, and that the random Fourier technique is definitely the most practical way to speed up kernel methods.
The contribution of this work is substantial since, besides providing the first result for infinite-dimensional outputs, it also presents an improved parameter complexity ($\sqrt N$, where $N$ is the sample size) with respect to the literature ($\sqrt N \log N$).
Moreover, this paper is very well written: there is no typo and a clear effort has been made to present clearly the theoretical analysis. Remarks and examples help understanding the results and generally anticipate the questions that may arise while reading the manuscript.
Weaknesses: Except for a minor remark (RF-RR is used in Lines 48, 74, and 81 before being defined in Line 112), I think that the paper has no flaw. I must admit that I am used to seeing a numerical experiment confirming the convergence rate obtained theoretically, but I understand that choices have to be made to fit the page limit, and I am quite grateful to the authors for trying to explain the successive steps of their proof instead. In a way, the paper is more consistent this way.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1) Could the authors give some details regarding the relaxation of the independence assumption in light of the proof outline (Section 4)?
2) Why the notation $\Phi(u; \hat \alpha, \{\theta_m\})$ is preferred to $\Phi(u; \hat \alpha)$ before Example 3.9, and the converse after? Besides, why Theorem 3.7 is named as such rather than as a corollary?
3) Could the authors explain the assumption $\mathcal G \in \operatorname{Im}(\mathcal K^{r/2})$ and its implications?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Overall, technical assumptions are discussed. Nevertheless, Societal impact is not addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We start by thanking the reviewer for your appreciation of the merits of our paper and your welcome suggestions to improve it. We will fix the use of RF-RR acronym in a revised version of the paper, thank you for catching that. Additionally, based on suggestions from the other reviewers, we have included some numerical experiments to visualize the convergence behavior of vector-valued RF-RR and compare it to our theory (see the attached figures pdf file).
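As a complement to the numerical experiments mentioned above, here is a minimal sketch of our own (not the authors' attached experiments; the random Fourier feature map, target operator, and all parameters are assumptions for illustration) of vector-valued random-feature ridge regression. In a noiseless, well-specified setting the test error is near zero for every output dimension $p$, consistent with the discussed insensitivity to $p$.

```python
import numpy as np

# Vector-valued RF ridge regression with random Fourier features (sketch).
rng = np.random.default_rng(0)
N, M, d = 200, 40, 3                      # samples, random features, input dim
W = rng.standard_normal((M, d))           # random frequencies
b = rng.uniform(0.0, 2.0 * np.pi, M)      # random phases

def rff(U):                               # feature map: rows are phi(u) in R^M
    return np.sqrt(2.0 / M) * np.cos(U @ W.T + b)

U_train = rng.standard_normal((N, d))
U_test = rng.standard_normal((500, d))
lam = 1e-8                                # small ridge parameter
for p in (1, 10, 100):                    # output dimensions
    A_true = rng.standard_normal((M, p)) / np.sqrt(M * p)
    Z = rff(U_train)
    Y = Z @ A_true                        # noiseless, well-specified targets
    alpha = np.linalg.solve(Z.T @ Z + N * lam * np.eye(M), Z.T @ Y)  # M x p
    resid = rff(U_test) @ (alpha - A_true)
    err = np.mean(np.sum(resid**2, axis=1))
    assert err < 1e-6                     # near-exact recovery, regardless of p
```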
Below, we address the specific concerns raised by the reviewer and thank the reviewer in advance for their patience in reading our detailed reply.
1. Regarding the independence assumption, we thank the reviewer for directing our attention back to this point; in fact, thanks to the reviewer's suggestion, we have now realized that our derivation goes through almost verbatim, with only two minor modifications to the proofs in Appendix D.1, when only assuming that the data pairs $(u,y)$ are of the form $y = \mathcal{G}(u) + \eta$, where the noise $\eta$ is a subexponential random variable such that the conditional expectation satisfies $\mathbb{E}[\eta|u] = 0$. This allows us to completely eliminate the independence assumption in our revised version. Thank you!
2. We define the equivalence of the two notations in line 104. However, we agree with the reviewer that our use is not consistent. The notation $\Phi(u;\alpha,(\theta_m))$ is introduced to emphasize the dependence of the model on the realizations $(\theta_m)_m\sim\mu^{\otimes M}$, and Thm. 3.4 explicitly mentions the $(\theta_m)$ in its hypotheses. However, the remaining results do not explicitly mention $(\theta_m)$ in the theorem statements so we will consistently use $\Phi(u;\alpha)$ after Thm. 3.4 in the revision.
Also, we choose to name Thm. 3.7 as is instead of Corollary 3.7 because it is the highlight/takeaway result of the paper that is easiest to compare to prior work (as in Table 1) and explain to general audiences.
3. Regarding the assumption $\mathcal{G} \in \mathrm{Im}(\mathcal{K}^{r/2})$, we view it as a regularity assumption on the underlying operator. In specific settings, when approximating an underlying function in finite dimensions, it corresponds to a fractional (Sobolev) regularity of the underlying function. In general, it appears difficult to check this condition, which is a definite limitation of this result, and we will point this out more explicitly in our revised version.
We sincerely hope that we have addressed the concerns of the reviewer to your satisfaction.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for answering my few questions and for going beyond by adding a numerical experiment supporting the theory.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for your reply and we are at your disposal during the discussion period if you have any further questions. | Summary: This paper investigates the theoretical aspects of learning vector-valued operators with random features. Specifically, it studies the convergence of the random feature estimate $\Phi(u;\hat{\alpha})$ to the true underlying function $\mathcal{G}$. Its main results are:
Theorem 3.4: Error bound for fixed $\lambda$ and fixed $N$.
Theorem 3.7: The same error bound in the well-specified case $\mathcal{G}\in\mathcal{H}$.
Theorem 3.10: Almost sure consistency of the random feature estimate to the true function $\mathcal{G}$ as $N\rightarrow\infty$ and $\lambda\rightarrow0$.
Theorem 3.12: The convergence rate.
Strengths: The paper is an entirely theoretical paper, so its value should lie in the significance of the results and the correctness of its proofs. I believe that the stated results would be good contributions (although see Questions for Theorem 3.10), but perhaps quite limited in scope. Unfortunately, I have not had time to go through all of the proofs in detail (perhaps I will in the subsequent reviewing process).
Weaknesses: The paper is in general well-written, and the mathematics is presented quite clearly. However, its structure makes it rather difficult to read. For example, when reading the main body of the paper, we are led to the appendix several times for definitions that are required to read on in the main body. This happens on L151, L177, L179, L222 and L240. Having proofs in the appendix is fine, and is often done, but the way we are led to the appendix in this paper disrupts the flow a bit in my opinion. Also, the proofs are not in one place - we have to jump back and forth between Section 4 and the Appendix to read the proof of the results.
Also, the results are very abstract, and I think it would have been much better if some concrete examples of $\phi$, $\Phi$, $\theta$, $\alpha$ and $\mathcal{H}$ were given. An example of a learning algorithm to which the results could have been applied would have been even better.
Minor comments:
L228: I think the superscript $\lambda$ should be $\lambda_k$.
L614, displayed equation: The squared brackets after $\mathbb{E}$ are not closed.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Theorem 3.10 and Corollary 3.13: It is, at least in my limited experience, rare to see almost sure statements in statistical learning theory. Could the authors please comment on what made it possible in this case, or whether they believe a simple Borel-Cantelli argument should make all of the results in statistical learning theory almost sure statements?
Also, none of the results seem to depend on the distribution $\mu$ of $\theta$, only on the number $M$ of features. I could be mistaken, but this is hard to believe - what if $\mu$ put all mass on one point? I could not find anything in Assumptions 3.1, 3.2 and 3.3 that prevents something like this.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors barely discuss the limitations of their work, except as "future work" in the Conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We start by thanking the reviewer for reading our paper and for their comments. We have fixed the typos in the revised version of our paper. Regarding the reviewer's criticism of the structure and readability of the paper, we agree that splitting the paper into theorem/lemma statements in the main body and proofs in the appendix makes a thorough reading difficult. To ease the burden on the reader, in the revision we will: 1) use the ``thm-restate'' LaTeX package to repeat the full theorem/lemma statements immediately above their proofs in the appendix, removing the need to flip back and forth; and 2) submit the full paper (main body plus supplement) as the official supplementary materials so that all hyperlinks work as intended when reading results, proofs, and definitions.
Additionally, we have now included a concrete operator learning benchmark example (Burgers' evolution operator) where our RF ridge regression algorithm is actually implemented. Here, $p=\infty$, and a nontrivial choice of $\varphi$, $\theta$, and $\Phi$ is used in the regression problem (see the attached pdf figure file for the details).
Below, we address the specific concerns raised by the reviewer and thank the reviewer in advance for their patience in reading our detailed reply.
1. Regarding the reviewer's question on Borel-Cantelli, we believe that it is indeed the case that non-asymptotic, high-probability results can often be turned into almost sure asymptotic results. However, there are certain drawbacks to such almost-sure asymptotic results. For example, even though the asymptotic behavior is guaranteed to occur ``eventually'' (with probability 1), it is completely unclear whether we should ever expect to observe such rates at practically relevant numbers of data-pairs and random features. In this sense, we view our asymptotic consistency result as a considerably weaker assertion than the high-probability estimates that it is derived from.
2. Regarding the reviewer's comment about the dependence on the underlying measure $\mu$, it is indeed true that our results do not make any specific assumptions on $\mu$, as such. However, $\mu$ implicitly determines the underlying reproducing kernel Hilbert space (RKHS), via the random feature map $\varphi$. In particular, if $\mu$ is concentrated on a single point, then this RKHS is necessarily at most one-dimensional, and hence our results only imply convergence of the RFM to any operator *belonging to this one-dimensional subspace*, in this specific case. More generally, our results require either that the underlying operator belong to the RKHS or that it at least belong to the $L^2_\nu$-closure of this RKHS. This can be viewed as a compatibility requirement between the operator and the RFM; the performance of the RFM should absolutely be expected to depend on the "degree of compatibility between the operator $\mathcal{G}$, the measure $\mu$ and the random feature map $\varphi$". We thank the reviewer for this pertinent question, and will include a more detailed discussion of this point in our revision.
We sincerely hope that we have addressed all your concerns and kindly request the reviewer to update their assessment accordingly.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for your answers to my questions, and the effort to demonstrate a concrete example in the Figure provided. I am quite satisfied with the answers to my questions, and I have raised my score accordingly.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their comments, as well as for raising their score. | Summary: The paper proposes a statistical analysis of the risk associated with learning with vector-valued random features (vv-RF) in the context of ridge regression.
The analysis shows that $\sqrt{N}$ random features are enough to attain a $\mathcal{O}(\frac{1}{\sqrt{N}})$ squared error (matching minimax rates) and strong consistency in the well-specified setting, as well as slower convergence rates, related to how well the target function can be approximated in the RKHS, in the misspecified setting.
Strengths: - Topic is highly relevant to the machine learning community
- Paper is remarkably well written despite the technical complexity
- The paper is mathematically sound
- Assumptions are mild and very standard
Weaknesses: Contribution (C4) is a bit exaggerated in my opinion, see questions.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: In the contributions listing, you mention in (C4) that the proof approach is not specific to RR. While I agree that you make no use of the closed form solution, you still make use of the fact that the loss is the squared loss (e.g. in 4.4), and the analysis would not hold if another loss was chosen. The same remark applies in the conclusion when you mention that future works could focus on general $L^p$ losses. If the analysis is not specific to the RR setting, for which class of losses does it hold?
And more of a remark: it would be good to include a non-trivial example of a feature map $\phi(x, \theta)$ used to solve a problem where $p = \infty$.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: - Limitations are well discussed in the paper, except perhaps the point about the specificity to the RR setting that is mentioned in the "Questions" section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We start by thanking the reviewer for your appreciation of the merits of our paper and your welcome suggestions to improve it. Below, we address the concerns raised by the reviewer and thank the reviewer in advance for their patience in reading our detailed reply.
1. We believe the reviewer's comment about contribution (C.4) is a very fair criticism, as certain steps in our derivation, including identity (4.4), are indeed specific to a quadratic loss. In view of this, we will tone down the statement of (C.4) in a revised version of the paper. We plan to expand on error bounds for the RFM for more general loss functions in future work. At least in the absence of noise, our intuition is that the RFM estimates should exhibit the same scaling as estimates for Monte-Carlo sampling; based on this intuition, we fully expect generalization to $L^q$-type loss functions to be possible since the Monte-Carlo error is independent of $q$. When allowing for noise, significant adaptations of our arguments may indeed be necessary, as equation (4.4), which is specific to a quadratic loss, is then important to our present approach.
2. A non-trivial example of a feature map in the $p=\infty$ infinite-dimensional output space case is given by the Fourier space features [NS21, Eqn. 3.5] for the Burgers' equation solution operator. It leads to a fully correlated (non-diagonal) limiting operator-valued kernel (hence non-trivial), which is important for the setting $p=\infty$. We include numerical results for this problem in the attached pdf figure file, and will add this concrete example to the revised version of our paper.
We again thank the reviewer for appreciating the merits of this work, and sincerely hope to have addressed your concerns.
* [NS21] Nelsen, N.H., Stuart, A.M., "The random feature model for input-output maps between Banach spaces", *SIAM Journal on Scientific Computing*, **43**(5), A3212--A3243, (2021)
---
Rebuttal Comment 1.1:
Title: Acknowledging rebuttal
Comment: I thank the authors for their answer. My original assessment still holds, I advocate for acceptance.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their reply, and for appreciating the merits of our work. We remain at your disposal during the discussion period if you have any further questions. | Rebuttal 1:
Rebuttal: At the outset, we would like to thank all five reviewers for their thorough and patient reading of our article. Their fair criticism and constructive suggestions will enable us to improve the quality of our article. If accepted, a revised camera-ready version of the article, with changes as outlined below, will be uploaded. We proceed to answer the points raised by each of the reviewers individually, below. We also attach one page of figures that show the results of numerical experiments suggested by the reviewers (with code to be made publicly available on Github). The literature references in this figures document refer to the two papers below.
Yours sincerely,
Authors of "Error Bounds for Learning with Vector-Valued Random Features".
* [KLLABSA23] Kovachki, N., Li, Z., et al., "Neural operator: Learning maps between function spaces with applications to PDEs", *Journal of Machine Learning Research*, **24**(89), 1--97, (2023)
* [NS21] Nelsen, N.H., Stuart, A.M., "The random feature model for input-output maps between Banach spaces", *SIAM Journal on Scientific Computing*, **43**(5), A3212--A3243, (2021)
Pdf: /pdf/5391321cdfab958c307a232068072a4f6ccb0abe.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The paper proposes a learning theory for ridge regression in random feature models in the setting when the input-output map $\mathcal{G}: \mathcal{X} \mapsto \mathcal{Y}$ is vector-valued (potentially infinite-dimensional). The model under consideration is built from random features $\phi: \mathcal{X} \times \Theta \mapsto \mathcal{Y}$, such that the estimator $\Phi(\cdot)$ consists of a linear combination of RFs $\Phi(\cdot) = \sum_{m=1}^M \alpha_m \phi(\cdot, \theta_m)$, with random parameters $\theta_1, \dots, \theta_M \sim \mu$. The parameters $\alpha_1, \dots, \alpha_M$ are fitted by minimizing the regularized empirical risk $\frac{1}{N} \sum_{n=1}^N \lVert y_n - \Phi(u_n) \rVert_{\mathcal{Y}}^2 + \frac{\lambda}{M} \lVert{\alpha}\rVert_2^2$ given data sampled from some data-generating distribution $(u_n, y_n) \sim \nu$. The assumed hypothesis class is an RKHS that consists of functions $\mathcal{F} \in \mathcal{H}$ given by weighted averages of the RFs $\mathbb{E}_{\theta \sim \mu}\lbrack \alpha(\theta) \phi(\cdot, \theta)\rbrack$ for $\alpha(\cdot) \in L^2_\mu(\Theta, \mathbb{R})$. However, the considered setting also allows for model misspecification $\mathcal{G} = \mathcal{G}_\mathcal{H} + \rho$, such that $\mathcal{G}_\mathcal{H} \in \mathcal{H}$ and $\rho$ is almost surely bounded. The observation noise $y = \mathcal{G}(u) + \eta$ is modelled by a subexponential distribution, allowing for heavier-tailed noise than the sub-Gaussian case.
Given the previous setting, the main result seems to be Theorem 3.4, which bounds the population squared error $\mathbb{E}_\nu\lbrack \lVert \mathcal{G}(u) - \Phi(u) \rVert_{\mathcal{Y}}^2 \rbrack$ in terms of various quantities relating to the functions $\mathcal{G}, \mathcal{G}_\mathcal{H}, \rho$ and the observation noise $\eta$, under mild conditions which roughly state that $M$ is of order $\sqrt{N}$. In contrast to previous work, the parameter complexity $M$ is free from logarithmic factors, which gives the lowest parameter complexity so far. As an application of the previous bound, the authors derive results regarding 1) strong statistical consistency of the model, that is, almost sure convergence in mean square of $\Phi(\cdot)$ to $\mathcal{G}(\cdot)$, and 2) corresponding explicit convergence rates. Sketches of the proofs are given in Section 4 to guide the reader along the main steps.
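As a reading aid, the estimator summarised above can be sketched in a small finite-dimensional toy setting; everything below (the particular feature map, the target operator, all names) is an illustrative choice of mine rather than anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, d, p = 200, 50, 3, 4   # data pairs, random features, input/output dims

# Hypothetical target operator G: R^d -> R^p (illustration only).
A_true = rng.standard_normal((p, d))
def G(u):
    return np.tanh(A_true @ u)

# Ad hoc random feature map phi(u; theta_m) in R^p, with theta_m ~ mu.
thetas = rng.standard_normal((M, d))
b = rng.standard_normal((M, p))
def phi(u, m):
    return np.cos(thetas[m] @ u) * b[m]

U = rng.standard_normal((N, d))
Y = np.stack([G(u) for u in U]) + 0.01 * rng.standard_normal((N, p))  # y_n = G(u_n) + eta_n

# Stack phi(u_n; theta_m) into an (N*p) x M design matrix: fitting alpha by
# the regularised empirical risk above is then an ordinary ridge regression.
F = np.stack([[phi(u, m) for m in range(M)] for u in U])      # shape (N, M, p)
Fmat = np.transpose(F, (0, 2, 1)).reshape(N * p, M)
lam = 1e-3
alpha = np.linalg.solve(Fmat.T @ Fmat / N + (lam / M) * np.eye(M),
                        Fmat.T @ Y.reshape(-1) / N)

def Phi(u):
    """Fitted estimator Phi(u) = sum_m alpha_m * phi(u; theta_m)."""
    return sum(alpha[m] * phi(u, m) for m in range(M))

train_mse = np.mean((np.stack([Phi(u) for u in U]) - Y) ** 2)
```

The linear solve is just the normal equations of the risk $\frac{1}{N}\sum_n \lVert y_n - \Phi(u_n)\rVert^2 + \frac{\lambda}{M}\lVert\alpha\rVert_2^2$; the paper's infinite-dimensional output setting ($p=\infty$) is of course not captured by this sketch.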
Strengths: The paper is well-written and coherent. The area, theoretical properties of infinite-dimensional RF models, is relevant to (contemporary) applications in scientific computing, such as learning the solution operator of PDEs using RF or neural models. The presented results are highly technical and non-trivial to derive. In addition to generalizing the theory of RF models to this setting, the paper also sharpens the RF parameter complexity by a logarithmic factor relative to previous works, which relied on stronger assumptions on the output domain. The presentation is good-ish; I think they did a good job in outlining the setting, then the main results, and presenting the key steps of the proofs in Section 4, with full proofs deferred to the appendix.
Weaknesses: As a theoretical work, there is no empirical part to speak of. This is, on the one hand, common in these types of papers, but on the other, it makes the paper less accessible to a majority of the NeurIPS community. Although it seems like such RF models do have important applications, I am unsure about the size of the community that is involved in these kinds of niche results, as the main application only seems to be in operator learning.
One other point is that although there are applications listed, the mathematical objects considered in the paper still seem kind of abstract, I think giving more examples and connecting the dots more to the applications would help in this aspect (see question 1).
Lastly, no illustrations are provided whatsoever to support comprehension or visualize the results. Some figures would go a long way to make the paper less dense (see question 2)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - As being familiar with random feature models, I understood the main premise of the paper, but I was still left wondering about how one could construct feature maps into such infinite-dimensional spaces. I think giving some examples about what kind of RFs were used before for operator learning, on what kind of data, etc would go a long way.
- I was wondering if it would be possible to visualize the various hyperparameter complexities and empirically validate the convergence rate appearing in the theorems to have some visual intuition either using synthetic data or one of the datasets appearing in previous work (if the true operator is known)? Another idea which would also help in structuring the results and aid in understanding them is to draw a flowchart about the previous theorems and lemmas showing how they depend on each other.
Minor:
- In line 32, when the authors state the size of RF matrices is quadratic in the number of features $M$, do they actually mean the covariance matrix that has size $M \times M$, or the $N \times M$ feature matrix? In the latter case I am unsure where the quadratic factor comes in.
- In line 228, there might be a $k$ missing in $\mathfrak{R}_N^{\lambda_k}$, otherwise $\hat \alpha$ does not actually depend on $k$
- In Appendix A, line 489, the authors define the subexponential norm in terms of the moments, which is slightly uncommon, since it's standard to define it in terms of the exponential Orlicz norm. It might be beneficial to state the original definition first, and relate the moment-based one to it.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: As a purely theoretical work, there is no direct societal impact commonly associated with AI models. The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We start by thanking the reviewer for your appreciation of the merits of our paper and your welcome suggestions to improve it. Below, we address the concerns raised by the reviewer and thank the reviewer in advance for their patience in reading our detailed reply.
1. The construction of "good" random feature pairs $(\varphi,\mu)$ in the infinite-dimensional output space setting is still an open problem. This is partly because there are no ``canonical'' operator-valued kernels (OVKs) $K\colon\mathcal{X}\times\mathcal{X}\to\mathcal{L}(\mathcal{Y};\mathcal{Y})$, which the random features approximate. In contrast, canonical kernels in the scalar output setting include squared exponential and Mat\'ern kernels, which have interpretable hyperparameters such as lengthscales, regularity, and variances that may be adapted to the problem. In the operator learning setting, handcrafted vector-valued random features were designed in [NS21, Sec. 3] for specific PDE problems on a case-by-case basis. In [KDPCRA16], separable OVKs $K(u,u')=k(u,u')T$ were used, where $k$ is a canonical scalar kernel (hence RF approximations of $K$ are then straightforward if one for $k$ is known) and $T$ is a bounded linear operator on $\mathcal{Y}$. Both papers work with functional data. More general constructions would require novel ways to adapt such features or kernels to data.
2. To empirically test the validity and sharpness of our theory, we take the reviewer's suggestion and implement a Burgers' equation operator learning benchmark for a range of sample sizes $N$, random features $M$, and data resolutions $p$. The dataset appears in previous work on operator learning. This example does not necessarily satisfy our theoretical assumptions because we cannot verify that the Burgers' solution operator $\mathcal{G}$ is an element of the RF's RKHS. Also, the feature map uses an unbounded activation function (ELU), while our theory is only developed for bounded RFs. Nevertheless, the observed parameter and sample complexity reasonably validate the theory.
3. Regarding the reviewer's suggestion of including a flowchart illustrating the interdependency of our results, we agree that this would greatly help in providing a quick overview of our theoretical results. We thank the reviewer for this suggestion, and will include it in a revised version of our paper.
Regarding the minor comments:
* By quadratic, we do mean the RF ``gram/covariance-like'' matrices $k_{ij}=\frac{1}{N}\sum_{n=1}^N\langle \varphi(u_n;\theta_i),\varphi(u_n;\theta_j)\rangle_{\mathcal{Y}}$ that are of size $M$ by $M$.
* We thank the reviewer for pointing out the typo in Line 228.
* In the revision, we will cite the standard definition of exponential Orlicz norm that the reviewer mentions (e.g., Vershynin's book).
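As a concrete aside, the $M \times M$ ``gram/covariance-like'' matrix from the first bullet can be assembled directly from feature evaluations; a small sketch with random stand-ins for $\varphi(u_n;\theta_m)$ (shapes and names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, p = 100, 20, 5   # data pairs, random features, output dimension

# Hypothetical feature evaluations phi(u_n; theta_m) in R^p (random stand-ins).
feats = rng.standard_normal((N, M, p))

# k_ij = (1/N) * sum_n <phi(u_n; theta_i), phi(u_n; theta_j)>_Y : an M x M matrix.
K = np.einsum('nip,njp->ij', feats, feats) / N

assert K.shape == (M, M)
assert np.allclose(K, K.T)                       # symmetric
assert np.min(np.linalg.eigvalsh(K)) >= -1e-10   # positive semi-definite
```

Note that $K$ is an average of outer products, so it is always symmetric positive semi-definite, regardless of the feature map.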
We sincerely hope to have addressed the concerns to your satisfaction and thank the reviewer again for pointing them out to us.
* [NS21] Nelsen, N.H., Stuart, A.M., "The random feature model for input-output maps between Banach spaces", *SIAM Journal on Scientific Computing*, **43**(5), A3212--A3243, (2021)
* [KDPCRA16] Kadri, H., et al., "Operator-valued kernels for learning from functional response data", *Journal of Machine Learning Research*, **17**, 1--54, (2016)
On skip connections and normalisation layers in deep optimisation | Accept (poster) | Summary: This paper studies optimisation in deep architectures, and presents a framework for theoretically capturing the role of architectural components like skip connections and normalisation layers/weight normalisation. This framework considers the curvature (smoothness) and regularity properties of multi-layer networks (networks that are compositions of per-layer blocks), but differs from existing results for deep NN optimisation as it enables convergence to be shown for trajectories far away from initialisation (useful for settings where global minima only exist at infinity, like with cross-entropy loss). Finally, the authors use their theoretical results to argue that the empirically observed optimisation benefits of skip connections may be due to improving the conditioning of the loss landscape, and reinforce this argument by motivating modified skip connections which (slightly) improve performance in practice.
Strengths: - The paper is well motivated; it address an important question (that of bridging the gap between theory and practice in deep learning optimisation)
- The paper is well written and places itself clearly and fairly within the existing literature (in section 3)
- The paper makes contributions towards answering this question, such as: 1) building upon existing work in optimisation theory to enable global analyses that can hold for convergence far from initialisation, and 2) connecting their theoretical results to practical considerations concerning architectural design, e.g. skip connections.
Weaknesses: - I'm not sure the extent to which the theoretical bounds, e.g. showing the benefits of skips in terms of regularity (i.e. looking at minimum eigenvalues), translate into practice, due to assumptions that may not hold in real settings, and also the fact that they are worst-case bounds (see lines 265-271 for the authors' own words on this). I should add that this isn't really a huge criticism of this work as it applies to most works in this area.
- I'm not sure if the statement in section 6 'skip connections aid optimisation by improving loss regularity' is itself a big contribution. As the authors write in lines 138-146, there have been several works making this or similar statements. Moreover, as far as I understand the theoretical arguments in Theorems 4.7 and 5.1, the argument goes like: "Each residual layer's jacobian can be written as <identity + something bounded to have eigenvalues of norm less than 1>, which will have positive minimum eigenvalue, and by the chain rule their product will too", which is quite a simple argument (forgive me if I have misunderstood, and of course it is good to formalise these things).
- The experimental results seem a bit thin; in particular, I'm hesitant to read too much into comparisons of training speed on ImageNet if learning rates haven't been tuned (and indeed it seems most hyperparameters were not tuned for the CIFAR results, in Appendix A.1). It would also be good to verify the empirical gains of the proposed skip architectures across different optimisers (e.g. Adam), more depths/different residual architectures like PostAct, and other data settings (e.g. language tasks with residual transformers), to be more convincing empirically.
- I think the Deep Kernel Shaping recent line of work (https://arxiv.org/abs/2110.01765, https://arxiv.org/abs/2203.08120) is highly relevant, and complements the current paper, so probably should be cited. Likewise, the reparameterised VGG works (https://arxiv.org/abs/2101.03697, https://arxiv.org/abs/2205.15242).
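(As a side note on the Jacobian argument sketched two bullets above: the "identity plus contraction" step is easy to check numerically. The toy illustration below, with random matrices and singular values standing in for the paper's actual Jacobians, is mine, not the authors':)

```python
import numpy as np

rng = np.random.default_rng(2)
d, depth = 10, 8

J = np.eye(d)
for _ in range(depth):
    A = rng.standard_normal((d, d))
    A *= 0.9 / np.linalg.norm(A, 2)   # rescale so the spectral norm of A is 0.9 < 1
    J = (np.eye(d) + A) @ J           # chain rule across residual blocks

# Each factor I + A has smallest singular value >= 1 - ||A|| = 0.1, so the
# product of `depth` factors has smallest singular value >= 0.1**depth > 0:
# no direction is annihilated, however deep the composition.
sigma_min = np.linalg.svd(J, compute_uv=False).min()
assert sigma_min >= 0.1 ** depth - 1e-12
```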
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Can the authors comment on how their results, which are designed for parameters converging to global minima at infinity, square with the requirement of bounded weights (or normalised weights/normalisation layers) in their analysis?
- Can the authors clarify what they mean by 'I is an average pool', do you pool over all pixels or only within a set filter size? It would be good just to provide a formula for this.
- Please can the authors respond to my comments in the weaknesses section. I will be happy to update my score if the rebuttal can address these.
**Update: after clarifications in the rebuttal, additional experiments, and reading the other reviews, I am increasing my score to 7**
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Please see weaknesses, though the authors are quite fair about the limitations of their work and also future directions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their attentive and detailed critique.
*I'm not sure the extent to which the theoretical bounds…* You're correct that the theoretical bounds are not practical. Thank you, though, for recognising that this is not a problem unique to our work; it is common to virtually all works in this area so far, many of which have been published at venues such as NeurIPS and ICML.
*I'm not sure if the statement in section 6…* We should have been clearer in our wording here. The “regularising effect” to which we are referring in Lines 138-146 is of a different kind to the “loss regularity” we refer to in Section 6. Specifically, by “loss regularity” in Section 6, we are referring precisely to a larger value of $\|\nabla\ell\|^2 / \ell$ (i.e. a larger PL coefficient), which therefore relates directly to optimisation theory and convergence speed. While the other works cited in Lines 138-146 study “regularising” effects of skip connections, they do not make the specific connection to loss regularity in the form of a larger PL coefficient that we do. In particular, to our knowledge no other works have yet observed the improved training speed that we predict (and observe empirically) based on this better PL regularity. We will change our wording in Lines 138-146, as well as in Section 6, to clear up this confusion.
*Moreover, as far as I understand the theoretical arguments in Theorems 4.7 and 5.1…* Your understanding of the argument for Theorem 4.7 is essentially correct, and we agree that the argument here is relatively simple. However the same argument does not apply in Theorem 5.1. Theorem 5.1 already assumes that one has a positive PL coefficient everywhere (which the argument in Theorem 4.7 gives), and then proves that convergence to a global minimum is still possible even when this PL coefficient is not uniformly bounded below by a positive constant (i.e. is not a PL constant), and decays to zero as training progresses. The argument for Theorem 5.1 is significantly more involved, requiring a number of estimates on the rate of decay of the PL coefficient relative to the rate of growth in parameter norm as the trajectory tends towards a global optimum at infinity. It turns out that in order to get convergence to this optimum, it is necessary that a certain infinite series diverges. We are able to show that the PL coefficient goes to zero just slowly enough that this infinite series does indeed diverge, and asymptotic convergence of training to a global optimum is assured. The full proof can be found in the appendix.
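To illustrate the distinction, consider a one-dimensional caricature (purely illustrative, not from the paper) of the situation Theorem 5.1 handles: a loss whose global minimum sits at infinity, and whose local PL coefficient decays to zero along the trajectory, yet slowly enough that plain gradient descent still drives the loss to its infimum.

```python
import math

def loss(w):
    # log(1 + e^{-w}): infimum 0 is attained only in the limit w -> +infinity.
    return math.log1p(math.exp(-w))

def grad(w):
    return -1.0 / (1.0 + math.exp(w))

w, lr = 0.0, 1.0
for _ in range(20000):
    w -= lr * grad(w)

# The local PL coefficient grad(w)**2 / loss(w) behaves like e^{-w} and so
# vanishes along the path, but the loss still approaches its infimum of 0.
assert loss(w) < 1e-3
assert grad(w) ** 2 / loss(w) < 1e-3
```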
*The experimental results seem a bit thin…* To alleviate your concern, we have run the same learning rate sweep for ImageNet (to the extent our computational resources and time constraints allowed), showing that the claimed convergence improvements indeed holds across different learning rates where convergence is possible at all. We will work on having a more complete sweep done for the final version, should our paper be accepted. Please note also that we have tested networks of different depths: we used six layer networks on MNIST, ResNet18 on CIFAR and ResNet50 on ImageNet, and we used standard training schemes commonly thought of as close to optimal. Since our theory only concerns simple gradient methods, we feel that adaptive methods such as Adam are outside the scope of predictions we can reasonably make. Moreover, our Hypothesis 6.1 cannot be applied to PostAct ResNet architectures due to the presence of an additional batch norm in the skip connection. Our hypothesis could be generalised to cover this, but such a generalisation would be out of the scope of this work.
*I think the Deep Kernel Shaping recent line of work…* We agree, these works are certainly relevant. We will cite them in the next version of our paper.
*Can the authors comment on the fact…* In order for us to be able to guarantee Lipschitz gradients and positive PL coefficient globally, it was necessary for us to assume that all weights were normalised. One consequence of this is that the outputs of the network are bounded functions of parameters for any given training set. Restricted to this bounded set of possible outputs, the cross-entropy cost attains its global minimum at points on the boundary of this set. In particular, the global minima of cross-entropy on this bounded set are not zero. The preimages of these boundary points are always points at infinity in parameter space due to the nature of normalisation formulas: the function $x\mapsto x/\sqrt{\epsilon + x^2}$ is bounded, taking values in the open interval $(-1,1)$ for finite $x$, with the extreme values $1$ and $-1$ attained only in the limits $x \to \pm\infty$ respectively.
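This boundedness is easy to verify numerically; the following is a quick sketch of the scalar map above, with $\epsilon$ a small stabiliser as in standard normalisation layers (the particular sample values are ours, chosen only for illustration).

```python
import math

def normalise(x, eps=1e-5):
    """The scalar normalisation map x -> x / sqrt(eps + x^2)."""
    return x / math.sqrt(eps + x * x)

# Bounded for every finite input: |output| < 1 strictly.
samples = [-1000.0, -100.0, -1.0, 0.0, 1.0, 100.0, 1000.0]
assert all(abs(normalise(x)) < 1.0 for x in samples)

# The extreme values +/-1 are only approached, never attained, as |x| grows.
print(normalise(10.0))    # close to 1
print(normalise(1000.0))  # closer still, but strictly below 1
```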
*Can the authors clarify what they mean by 'I is an average pool'…* In PyTorch code, this average pool is just an nn.AvgPool2d, whose filter size and stride are both equal to the stride of the corresponding 1x1 convolution. We thus pool only over a fixed filter size. We will include more details in the next iteration of the paper.
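As a concrete sketch of what such a pool computes, here is a minimal 1-D analogue in pure Python (our illustration, not the code used in the paper): with kernel size equal to stride, the pooling windows are disjoint, so each output is the mean of one non-overlapping block of inputs, just as `nn.AvgPool2d(kernel_size=s, stride=s)` does per spatial block.

```python
def avg_pool_1d(x, stride):
    """Average pool with kernel size == stride: non-overlapping windows."""
    return [sum(x[i:i + stride]) / stride for i in range(0, len(x), stride)]

# With stride 2, each output averages one disjoint pair of inputs.
print(avg_pool_1d([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], stride=2))
# [1.5, 3.5, 5.5]
```

(The pool is then rescaled so that its singular values equal 1; for a width-$k$ 1-D pool of this form the singular values are $1/\sqrt{k}$, so the rescaling factor would be $\sqrt{k}$, though that factor is our own back-of-envelope calculation for the 1-D case.)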
We hope we have been able to adequately address your concerns in the Weaknesses section.
---
Rebuttal Comment 1.1:
Title: Thanks, a few more questions
Comment: Thank you for the reply to my review. I have a few more questions which I would appreciate replies to:
1. In Theorem 5.1, can you provide an intuitive explanation for what $\mu_t$ correspond to? The notation is suggestive of local PL-coefficients, but as I understand it these are chosen 'hyperparameters'. Also, I think it should be $\mu_i$ not $\mu_t$ in the equation below line 256.
2. What is the difference in order of LN computation in PostAct/PreAct ResNets? I am more familiar with the PostLN/PreLN difference in Transformers so I may be misunderstanding here. In particular, PreLN is the one which has the normalisation inside the residual branch, whereas PostLN uses LN on the sum of the residual branches (outside of any single branch). Is PostAct=PostLN and PreAct=PreLN in your definition? I can also check a reference/figure if you have one. It is not obvious to me why PostAct/PostLN doesn't satisfy Hypothesis 6.1
Thanks
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their additional time and attention.
1. The $\mu_t$ are indeed similar to local PL coefficients: at each step $t$, $\mu_t$ is just the smallest singular value of the derivative of the parameter-function map (see Theorem 2.4) at the parameter $\theta_t$, and is therefore a lower bound for $\|\nabla\ell(\theta_t)\|^2/\ell(\theta_t)$, which determines the speed of convergence. It is not quite true that these are hyperparameters. They are instead determined by the architecture. And it is correct that it should be $\mu_i$ and not $\mu_t$ below line 256, thank you.
2. The problem we are referring to is only an issue with the 1x1-convolutional skip-connected blocks (i.e. the blocks that reduce the feature dimension: there are 4 in each ResNet architecture). This is because we only apply our Hypothesis 6.1 to these specific blocks, and leave the others alone. In the PreAct ResNet, these 1x1-convolutional skip-connected blocks have the form:
$$
x\mapsto c(x)+f(x)
$$
where $c(x)$ is the 1x1 convolution applied to $x$, and $f(x)$ is the residual branch (a composite of BNs, ReLUs and Conv2ds) applied to $x$. On the other hand, in the PostAct (original) ResNet, the 1x1-convolutional skip-connected blocks have the form:
$$
x\mapsto BN(c(x)) + f(x)
$$
where $c(x)$ and $f(x)$ are the same as above, and BN is an additional batchnorm. Our modification is to add an additional linear map $I$, with singular values equal to 1, to the 1x1 convolution $c$. In the PreAct case this yields:
$$
x\mapsto I(x) + (c(x)+f(x))
$$
while in the PostAct case this would yield:
$$
x\mapsto BN(I(x) + c(x)) + f(x) \neq I(x) + (BN(c(x)) + f(x))
$$
In the first case, our Hypothesis predicts an upwards-shifting of the singular value distribution of the Jacobian of this block, because we have effectively added $I$, with all singular values equal to 1, to the forward pass of this block. But due to the nonlinearity of BN, the same cannot be said for the PostAct case (note the "$\neq$" above), so that our Hypothesis does not apply. | Summary: The paper deals with convergence of deep learning, making use of Neural Tangent Kernel (NTK) theory and its recent advances. It is a theoretical work, claiming to provide ... "l: 7-11 Abstract: ... the only proof of which we are aware that a class of deep neural networks can be trained using gradient descent to global optima even when such optima only exist at infinity, as is the case for the cross-entropy cost." The proof, which holds for normalised networks with dense skip connections, uses an interesting approach, showing that there exists a sufficiently small learning rate that induces both gradient Lipschitzness and a PL lower bound on the gradient norm.
Interestingly, the paper admits that the main theorem (claimed above) is unrealistic for many reasons, e.g., more data points than input dimensions, not skipping every layer, etc. The experimental results are there to demonstrate that some ideas used in the main theorem could hold in practical settings as well, forming Hypothesis 6.1.
Strengths: First I'd like to thank the authors for the submission of an interesting paper. It has well-written introduction and background knowledge sections, recalling the key building blocks (Lipschitz gradients and the PL inequality) and how they are treated in related works.
Also the approach of proving the main theorem as highlighted in the summary is interesting. However, its 'globality' is less surprising when normalisation layers are required (then any point in weight space is translated and rescaled to lie near the origin in probability, and thus near-origin NTK theory applies).
Weaknesses: Impact: From my perspective, and despite the nice work of proving the Main Theorem, the main weakness of this work lies in its impact. The Theorem relies on unrealistic assumptions, as admitted by the authors between lines 265-275. The paper then reduces to "just another" hypothesis of why deep learning converges to global optima "in practice".
Recall that the regularising effect of skip connections has been reported and theoretically argued in a multitude of previous works (see Related Work sections, or "Principles of Riemannian Geometry in Neural Networks", Hauser, Michael and Ray, Asok, Advances in Neural Information Processing Systems, vol. 30, 2017, to mention just one of many uncited works, for instance).
Clarity: To improve the impact, the work would benefit from a more accessible presentation. I leave it to the authors to consider this for the next iteration.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Q1: The paper strives to establish a new toolkit for a convergence theory in DL. One way to increase the impact and importance for the ML community would be to elaborate on and suggest research directions newly opened up by this approach. Could the authors elaborate on the open problems briefly mentioned in the Discussion and suggest ways to leverage the results in the paper to advance them?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 1 poor
Limitations: As per above, the main results relies on highly unrealistic assumptions and thus impact is limited mostly to 'proof technique' for certain problems within NTK realm. This could be powerful, but should be elaborated more, see Questions section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their polite critique.
We must first clear up what appear to be some misapprehensions in the reviewer’s interpretation of our work that are apparent from the Summary and Strengths section.
1. The reviewer seems to be under the impression that we are applying NTK theory for our convergence proof. What is typically understood by “NTK theory” is the theory developed in [1], wherein it is shown that for MLPs, a certain matrix-valued function of the parameters (which, confusingly, has become known in the community as the “tangent kernel”, perhaps contributing to the reviewer's misunderstanding), converges in the infinite-width limit to a constant matrix (called the “Neural Tangent Kernel” or NTK, to which the reviewer seems to be referring) at random initialisation. **We do not invoke this theory at any time in our paper.** First, our theory concerns much more general architectures than only MLPs. Moreover, our theory does not require infinite width, nor does it require this matrix valued function (the “tangent kernel”, as opposed to the *constant* NTK) be constant (i.e. **we do not work in the NTK regime**, as the reviewer seems to think). In fact, much of the theoretical work in our paper is devoted to accounting for the variation of the smallest singular value of this matrix-valued function as training proceeds. In particular, the globality of the result does not follow from NTK theory as claimed by the reviewer.
2. On a related note, the reviewer claims that we establish a PL lower bound on the gradient norm. This again is not accurate: as we state at Lines 107-110, when working with general cost functions such as the cross-entropy cost, a PL lower bound (in the sense of a uniform lower bound) is impossible. The best we can do is show that the quotient $\|\nabla\ell\|^2/\ell$ (to which the PL inequality gives a uniform lower bound) never vanishes. In fact, a good deal of the proof of our main theorem is spent showing how convergence to a global minimum is possible despite not having access to a uniform PL lower bound. Ours is the first work to propose such a technique.
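To make the role of the quotient $\|\nabla\ell\|^2/\ell$ concrete, consider a toy square-cost example (our own illustration, deliberately simpler than the setting of the paper): for a linear model $f(\theta)=A\theta$ with $\ell(\theta)=\tfrac12\|A\theta-y\|^2$, we have $\nabla\ell = A^\top(A\theta-y)$, so $\|\nabla\ell\|^2/\ell \geq 2\sigma_{\min}(A)^2$ uniformly. It is precisely this kind of uniform bound that the cross-entropy cost denies us.

```python
import random

# Toy check (our illustration): for f(theta) = A @ theta with square cost
# loss = 0.5 * ||A theta - y||^2, the quotient ||grad||^2 / loss is bounded
# below by 2 * sigma_min(A)^2.  Take A = diag(3, 1), so sigma_min(A) = 1.
A = (3.0, 1.0)  # diagonal entries; the singular values are 3 and 1

random.seed(0)
for _ in range(1000):
    # Residual r = A theta - y at a random point.
    r = (random.uniform(-5, 5), random.uniform(-5, 5))
    loss = 0.5 * (r[0] ** 2 + r[1] ** 2)
    # grad = A^T r (A is diagonal), so ||grad||^2 = 9 r1^2 + r2^2.
    grad_sq = (A[0] * r[0]) ** 2 + (A[1] * r[1]) ** 2
    if loss > 0:
        assert grad_sq / loss >= 2 * min(A) ** 2 - 1e-12
```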
Now onto the Weaknesses:
*Impact: From my perspective…* The critiques made regarding the strong assumptions could be made at least as forcefully about all other papers in the DL optimisation theory literature, over which our paper in fact makes several improvements as outlined in Lines 27-58. Many of these papers are justifiably published in outlets of similar calibre to NeurIPS, and despite strong assumptions have thousands of citations between them. It is inaccurate to say that they have had low impact. To say that our paper should be rejected based on strong assumptions alone is to say that NeurIPS shouldn’t be accepting DL optimisation theory papers at all, which we believe would be a great disservice to the field.
*Recall that regularisation of skip connections have been reported…* We make no claim to be the first to have studied the effects of skip connections generally, however we do believe that we are the first to have made the link between the inclusion of skip connections and improvements to loss regularity *in the precise sense* of a larger value of $\|\nabla\ell\|^2 / \ell$. The Hauser et al paper brought up by the reviewer, while very interesting and known to us, studies skip connections from a quite different perspective which we did not believe was relevant to our work. It was for this reason that we did not cite it. If the reviewer is alleging lack of novelty then we would very much like to know what papers we have missed that are making genuinely similar claims to our own.
*Q1: The paper strives to establish new toolkit for a convergence theory in DL…* Beyond the results themselves, we believe our work has two primary contributions which will be of use to others working in DL convergence theory.
1. The framework itself. Our paper is the first we are aware of which abstracts from any specific architectural choices, as considered in previous works, to give a general framework capable of treating all network layers on a common theoretical footing. Our framework is modular in reducing important properties of the total loss landscape to properties of the individual layers, where conditions are more easily checked, and is the first framework which allows this. Our main theorem is a proof of concept of this general approach, which we believe will be necessary in moving DL optimisation theory beyond the MLP examples typically considered and into more practical settings. We believe this modular approach may prove useful when trying to incorporate contributions from all layers into the estimation of PL-type bounds, which we stated as an open problem in our Discussion.
2. Our new proof technique removes the need for a uniform PL bound. Prior to our work, all works on the optimisation of deep nets have required sufficient conditions for the optimisation trajectory to stay close to initialisation, so that a uniform PL bound computed at initialisation can be used to quantify loss decrease throughout training. These assumptions are violated for large learning rate schemes used in practice and mentioned in our Discussion. In contrast, our new proof technique is the first which permits proof of convergence without such a uniform PL bound and without insisting that the optimisation trajectory stay close to initialisation. As the only technique we are aware of that can handle such weak assumptions, we believe that it may be of utility in extending DL optimisation theory to more practical training regimes involving edge of stability.
Should the paper be accepted, we will include this discussion in the extra space provided for the final iteration.
[1] Jacot et al, Neural Tangent Kernel: Convergence and Generalization in Deep Neural Networks, NeurIPS 2018
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response, including the correcting comments on NTK and PL bounds.
It was not claimed you were establishing a $\textbf{uniform}$ bound, however. What's meant is using the PL inequality step-wise and thus obtaining "never vanishes" for finite $T$. The point is that involving $\textit{normalisation}$ enables one to apply "NTK"-like techniques step-wise (after rescaling by normalisation, which keeps the "normalized" weights close to the origin, despite arbitrarily large absolute values - the main contribution points of your approach, I guess) and taking the limit.
Also, the cited work on skip connections [Hauser] does indeed take a different geometric perspective, yet it shows that adding skip connections increases the smoothness of the loss landscape, and thus regularity. I agree a differential geometry perspective is quite far from the approach in the manuscript, but it may be relevant to cite due to the similar claims.
I am not questioning a lack of novelty either. The opposite is correct: the novelty of your method has been appraised in the rebuttal. It is rather the Impact that was pointed out as being in question.
Overall, and after reading your response, I can see that one can view the work from the opposite angle. That is, including the normalisation and skip connections, and thus generalising previous convergence results to more general architectures, is a relevant contribution, despite these two conditions making the proofs more straightforward. Also thank you for the additional comments on Q1.
Overall, due to change of perspective above, I am ready to increase my rating by one notch to "4" giving more weight to other reviewers' opinions.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their further time and consideration. Should the paper be accepted, we will include citation and discussion of [Hauser] with the additional space provided.
---
Reply to Comment 1.1.2:
Comment: It seems the reviewer has not yet updated their score to 4, as they said they would.
With the deadline of the discussion period arriving soon, we ask that the reviewer please update their score at their earliest convenience. We thank the reviewer again for their time. | Summary: The paper provides a novel proof strategy showing guaranteed convergence to a global optimum for a particular class of deep neural networks with non-linear activation functions, provided there are skip connections. Such results have been known for linear networks, so it is nice that this also applies to non-linear networks, even when using a cross-entropy loss where some of the weights might diverge. The paper goes on to show empirically that by adding skip connections where they do not currently exist, it is possible to speed up the convergence of ResNets, albeit marginally.
Strengths: This is a very accomplished paper, clearly written by people who know what they are doing. The paper provides a novel proof strategy to prove convergence.
Weaknesses: Although technically elegant, the proof strategy only currently works in the strictly over-parameterised regime. This is not a regime where most people work and it seems unlikely that convergence fails when we have more data-points. I would therefore class this as a proof strategy that unfortunately doesn't really work. Obviously, I might have misunderstood and would change my judgement if the authors could convince me that I am wrong.
The proof also seems to sacrifice linear convergence, which makes the proof slightly less interesting. Again, maybe I am wrong about this.
More generally, it is a difficult judgement call as to the value of this paper. Clearly in deep networks people run in a regime where there are most likely many local and global optima and where they never run long enough to reach an optimum (given the number of parameters, their use of mini-batches and the typical size of the momentum parameter). It is also not clear that reaching a global optimum is desirable (early stopping was a common regularisation strategy in small neural networks---it may be unnecessary in deep neural networks because we always stop early). In addition, ReLU activation functions are usually found to give superior performance to those with Lipschitz gradients. It thus seems that the direction of this line of research is telling us little about deep learning optimisation. I accept that it is interesting whether there exists a deep network with a unique minimum that can be solved efficiently and that demonstrates some of the power of deep learning. This could be shown empirically, but conditions such as strict over-parameterisation make this less useful. I also appreciate the challenge involved in proving anything and I admire the attempt, but I still need some convincing that the direction of research is leading somewhere.
I found the empirical part of the paper slightly underwhelming. The effect of adding a skip connection did lead to systematic improvement, but rather minor. However, there are many explanations out there of why skip connections are beneficial and while it is plausible that the improvement is due to improved regularisation, I suspect other explanations could also be put forward (e.g. further breaking of symmetry).
This was by far the hardest paper I had to review and it took an order of magnitude longer to do it. Even then I am at my limit of understanding so my critique might be unfounded. If you can convince me then I am happy to change my score.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Is the regime where there are more training data than free parameters physically different (i.e. we would expect in some cases we would not converge)? If not, is not the success of the proof a mathematical artefact rather than capturing the physics?
Is it the case that convergence might be sub-linear? If so, are there any guarantees on the speed of convergence?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful reading of our work and for their kind words.
*Although technically elegant…* The reviewer is correct on the point that our proof technique only works in the strictly overparameterised regime. However, the reviewer is incorrect that most people do not work in this regime. This regime is common to all theoretical works on this topic that we are aware of, as well as being common to many practical settings such as image classification. All previous theoretical works also require at least one layer where the number of parameters is of at least the order of the number of training samples. There are mathematical reasons for thinking that overparameterisation is necessary for the ease of convergence of gradient descent on deep neural networks (see our response to your Question below).
*The proof also seems to sacrifice linear convergence…* For strongly convex cost functions such as the square cost, our Theorem is associated (by the usual arguments for PL functions) with a linear convergence rate. It is true, however, that for non-strongly convex cost functions (such as the cross-entropy cost), linear convergence is sacrificed. However, this is to be expected since even in convex optimisation, the best-known convergence rate for fixed step-size gradient descent on non-strongly convex objectives is sublinear: only O(1/T), with T denoting the number of iterations [1].
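The contrast between the two rates is visible already in one dimension (a sketch with toy costs of our own choosing, not the paper's setting): gradient descent on the strongly convex cost $\tfrac12 x^2$ contracts the loss by a fixed factor each step (linear rate), while on the convex but not strongly convex cost $\tfrac14 x^4$ the per-step contraction factor tends to 1 (sublinear rate).

```python
def gd_losses(grad, loss, x0, lr, steps):
    """Run gradient descent in 1-D, recording the loss at each iterate."""
    x, out = x0, []
    for _ in range(steps):
        out.append(loss(x))
        x -= lr * grad(x)
    return out

# Strongly convex: loss = 0.5 x^2, grad = x.  Constant contraction factor.
quad = gd_losses(lambda x: x, lambda x: 0.5 * x * x,
                 x0=1.0, lr=0.1, steps=2000)

# Convex but not strongly convex: loss = 0.25 x^4, grad = x^3.
quart = gd_losses(lambda x: x ** 3, lambda x: 0.25 * x ** 4,
                  x0=1.0, lr=0.1, steps=2000)

print(quad[-1] / quad[-2])   # ~0.81: fixed geometric contraction (linear)
print(quart[-1] / quart[-2]) # ~1.0: contraction vanishes (sublinear)
```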
*I still need some convincing that the direction of research is leading somewhere…* We are sympathetic with your concerns: similar concerns of our own on the limitations of existing theory motivated this work. However, it is important when evaluating the direction of a line of research to keep in mind where the line of research originated.
It seems to us that your concerns could be directed with even greater force against the theoretical papers on the optimisation of deep nets to which our own paper is responding, many of which are published in NeurIPS, ICML and conferences of similar calibre. As a rule, these works all fall short of capturing deep learning in practice, and as outlined in Lines 25-58 the purpose of our paper is to address some of these shortcomings and bring theory closer to practice. As stated in Lines 43-58, our paper is successful in this aim. We believe it will be of interest to other theoreticians in the pursuit of more practical assumptions as well as practitioners in the principled design of architectures. Insofar as theory papers still have a place at conferences like NeurIPS, we believe that our success in our aims justifies the publication of our paper.
*I found the empirical part of the paper slightly underwhelming…* We agree that the improvements were quantitatively small, but it must be remembered that we only changed four layers in each case, and these small modifications were made to architectures which are already known to be extremely easy to train (ResNets). Although the changes are small, to our knowledge no other paper in the deep learning optimisation theory literature (i.e. literature which has been concerned with proving convergence theorems) has been able to successfully make predictions about interventions on practical architectures.
We tried to rule out other possible explanations as much as possible with our experiments, for instance by running them at multiple learning rates, and by computing all relevant mediating quantities such as singular value histograms where possible (MNIST) and showing they behaved as predicted. However, the impossibility of ruling out all other explanations is a perennial problem in science. An experiment can never prove a hypothesis true: it can only ever prove a hypothesis false.
*This was by far the hardest paper I had to review…* We are sincerely grateful for your time. We know it is a difficult paper.
*Is the regime where there are more training data…* We would say that this regime is mathematically different. When there are fewer parameters than training data, the system of equations one is trying to solve is overdetermined (i.e. has more constraints than degrees of freedom). Such systems generically have few if any exact solutions, and whatever solutions exist tend to be isolated in locally convex regions of parameter space. In contrast, when there are many more parameters than training data the system is underdetermined (i.e. has more degrees of freedom than constraints), and typically admits infinitely many solutions occurring in large, connected regions. We recommend Figure 1 of [2] for an intuitive picture of this. Deep learning often occurs in the overparameterised regime in practice, and this overparameterisation is universally used to account for ease of optimisation in theory.
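A two-variable toy example (ours, purely for intuition) makes the dichotomy concrete: an overdetermined system generically has no exact solution, while an underdetermined one has a whole connected family of them.

```python
# Overdetermined: two equations, one unknown (x = 1 and x = 2).
# No exact solution exists; the least-squares fit leaves a residual.
best_x = 1.5  # minimiser of (x - 1)^2 + (x - 2)^2
residual = (best_x - 1) ** 2 + (best_x - 2) ** 2
print(residual)  # 0.5 > 0: the best fit still misses both equations

# Underdetermined: one equation, two unknowns (x + y = 1).
# Every point (t, 1 - t) solves it exactly: a connected line of solutions.
solutions = [(t, 1.0 - t) for t in (-10.0, 0.0, 0.5, 10.0)]
assert all(x + y == 1.0 for x, y in solutions)
```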
*Is it the case that convergence might be sub-linear?* Our theorem gives linear convergence for the square cost, just as is achieved in comparable works. For the cross-entropy cost specifically, the convergence is indeed strictly sublinear, as is common with gradient descent on non-strongly convex objectives. We do not presently have a precise convergence rate: while such a rate would in principle be computable, since the assumptions of the theoretical part are impractical (albeit no more so than other theoretical works in this area) we felt that our time was better spent setting ourselves apart further from these other works by focussing on the more practical implications of our ideas, rather than on additional theoretical quantification.
[1] Dimitri P. Bertsekas. *Convex optimization algorithms.* 2015.
[2] Liu et al. *Loss landscapes and optimization in over-parameterized non-linear systems and neural networks*, arXiv:2003.00307
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed and candid response. I am sympathetic to theoretical papers getting published in NeurIPS even when they make unrealistic assumptions (proofs are hard), although it is important that they address a real issue and are not just mathematicians talking to themselves. It is an interesting insight that you might get convergence to a unique optimum in the over-parameterised case, but not in the under-parameterised case (I was thinking about what convergence means in the wrong way). I am not very convinced that over-parameterisation is a common situation in image applications as image augmentation is nearly always used, but I accept that some sacrifice to reality is often required. I appreciate your attempt to break free of local regularity. It does seem surprising that such a large class of models converges to a global optimum, although I need to reflect on whether this is an important observation. I am a bit slow so I need some time to consider if I should increase my rating of this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for the consideration you are giving to our paper. We agree with your comment regarding image augmentation: perhaps this is one reason why minibatching is important, so that "effective overparameterisation" is available at each step.
We do not believe that we are only talking to ourselves with this paper. The point we were trying to make is that convergence analyses with more practical architectural choices (skip connections and normalisation) and the cross-entropy cost have been impossible with previously-existing techniques. Given the prevalence of these choices in practice, we felt this lack of analysis to be a real issue and worthy of our time in addressing it.
Thank you again for your time.
---
Reply to Comment 1.1.2:
Comment: With the discussion deadline coming up soon, have you had time to reconsider your assessment? | Summary: This paper proposes a formal theoretical framework to analyze gradient and convergence properties of multilayer deep networks. The authors make the following contributions: (1) they provide the first proof that certain deep networks can be trained to global optima at infinity using gradient descent; (2) the authors show that skip connections improve the loss regularity and thus improve the optimization speed; (3) they empirically show that improving the mean singular values of layer Jacobians can practically improve the convergence of ResNets on practical datasets like MNIST/CIFAR/ImageNet.
Strengths: The paper has following strengths:
1. The authors present a detailed study of smoothness and regularity of deep networks with skip connections and normalization layers. By connecting these properties to singular values of layer Jacobians, authors demonstrate that skip connections improve loss regularity. This insight is interesting, and the authors verified this empirically on practical networks on MNIST, CIFAR, ImageNet datasets.
2. The proposed theoretical framework may be useful for further theoretical studies on models with skip connections.
3. The authors also conducted an interesting experiment where they augment the 1x1 convolutions with a deterministic mapping which has singular values = 1 (they used average pool that was rescaled to give singular values = 1). The authors showed that this improved training convergence as well as the final loss for all networks considered.
Weaknesses: The paper has following weaknesses:
1. Some of the assumptions (as identified by the authors themselves) are not practical (lines 265-271). However, this is to be expected due to the difficulty of this kind of theoretical study. A few other assumptions include $d_{l-1} \geq d_{l}$ (line 241), which generally does not hold (models usually have an increasing number of neurons/feature maps). Also, in Theorem 5.1, can the authors elaborate more on the implications of using a convex cost function on the practicality of their theoretical results (line 252)?
2. Instead of using a deterministic map (avg pool rescaled to give singular values = 1), did the authors try directly using orthogonal initialization on the 1x1 convolutions in the skip connections? Would that significantly improve convergence compared to regular ResNets with default 1x1 initializations? Why did the authors use avg pool based parameterization of the skip connections?
3. There are a few important related works that the authors should discuss in section 3. Specifically, one paper defines “Layerwise Dynamical Isometry” [ICLR 2020, https://arxiv.org/abs/1906.06307] that talks about singular values of layer Jacobians and shows how improving those singular values can significantly improve training convergence of highly sparse networks. Another paper that is highly related defines NN-Mass [CVPR 2021, https://arxiv.org/abs/1910.00780] which explicitly relates singular values of layer Jacobians to skip connections. Although their theory is formulated for DenseNet-type networks, empirically, they show that their metric works for ResNets and MobileNets too. These methods show that such theory-grounded “training-free” metrics can identify efficient models directly at initialization without expensive training-based search methods. Other newer papers from this domain conduct full-blown zero-shot (training-free) Neural Architecture Search like ZenNAS (ICCV 2021, https://arxiv.org/abs/2102.01063), ZiCo (ICLR 2023, https://arxiv.org/abs/2301.11300), MAE-DET (ICML 2022, https://arxiv.org/abs/2111.13336), etc. These papers provide more insight towards expressive power or gradient properties of complex deep networks. Although the above papers may not have as much formal theory as the proposed work, it would be good to see relationship between more practical papers and the present theoretical study.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their attentive reading of our submission.
*Some of the assumptions…* We agree that the assumptions are strong, although they are not exceptionally strong in comparison to other theoretical works. It is also not entirely true that models usually have an increasing number of neurons: we point to ResNets as a counterexample, which, after the very first layer, have a non-increasing number of neurons in deeper layers. The cost functions that are most frequently used in practice are the square cost and the cross entropy. Since both of these are convex we do not believe this worsens the practicality of our results.
*Instead of using a deterministic map…* We did not think of trying orthogonal initialisation on the 1x1 convolutions. However, based on our theory, we would predict a similar boost in training speed to what we see with our proposed modification, as we would expect it to have a similar effect on the singular values of the model, at least at initialisation. We used average pool skip connections because they aligned closely with our Hypothesis 6.1.
*There are a few important related works…* Thank you very much for the recommendations. We certainly agree that these related works are relevant, and we will cite them and add some discussion in the extra space provided if our paper is accepted.
---
Rebuttal Comment 1.1:
Title: Thanks for the clarifications. I will keep my original rating.
Comment: I have read the author response and other reviews. I will maintain my original rating. | Rebuttal 1:
Rebuttal: We refer the reviewers to the attached PDF for plots of our ImageNet experiment run at different learning rates, as requested by reviewer LNGS. Note that every trial we did of the modified network with the largest learning rate diverged. This is not surprising, given that it has been observed previously that adding skip connections increases the curvature of the loss [1]. At the other end, we see the same improvements at a learning rate of 0.05 as predicted by theory and as we observe in the 0.1 case already present in the paper. Time and computation constraints prevented us from testing additional learning rates; we will do a more complete analysis in time for the final version if our paper is accepted.
[1] Ghorbani et al *An Investigation into Neural Net Optimization via Hessian Eigenvalue Density*, ICML 2019.
Pdf: /pdf/10ee7a178722a5b2a5f7a9701be7bbac2b16a736.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
ELDEN: Exploration via Local Dependencies | Accept (poster) | Summary: This work proposes `ELDEN, Exploration via Local DepENdencies`, a framework with an intrinsic reward that encourages the discovery of new interactions between entities, such as the agent or objects, that have some influence on each other. The method uses partial derivatives of the learned dynamics to model the local dependencies between entities. The uncertainty of the predicted dependencies is used as an intrinsic reward. What distinguishes this from other related work is how it defines what makes a state interesting. Traditional approaches may focus on the specifics of how entities in an environment interact. In contrast, here interesting states are defined based on whether the entities *can* interact, without focusing on the specifics of how the interaction occurs. Specifically, the algorithm is biased towards exploring states where the relationships between entities are not well understood.
To implement this, the authors train an ensemble of dynamics models. Within each model in the ensemble, local dependencies between objects are modeled using partial derivatives of state predictions with respect to the current state and action. Partial derivatives can capture how changes in state or action affect changes in predictions, which can be thought of as capturing local interactions. The uncertainty in local dependencies is then quantified as the variance across all the models in the ensemble where a high variance suggests high uncertainty.
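The uncertainty computation described above can be sketched in a few lines (a minimal numpy illustration, assuming toy linear dynamics models so that each Jacobian is simply the model's weight matrix; the names, threshold, and ensemble size are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

state_dim, ensemble_size, threshold = 4, 5, 0.1

# Toy "ensemble": each member is a linear dynamics model s' = W s, so the
# Jacobian ds'/ds is just W. (A learned network would use autodiff instead.)
ensemble = [rng.normal(size=(state_dim, state_dim)) for _ in range(ensemble_size)]

def local_dependency_graph(W, threshold):
    """Binarize the Jacobian: entry (i, j) = 1 if state j locally affects state i."""
    return (np.abs(W) > threshold).astype(float)

# Predicted local dependency graphs (binary adjacency matrices), one per member.
graphs = np.stack([local_dependency_graph(W, threshold) for W in ensemble])

# Per-entry variance across the ensemble, averaged over all entries: high
# disagreement about which dependencies are active marks a state worth exploring.
intrinsic_reward = graphs.var(axis=0).mean()
```

In the actual method the Jacobians are computed per state-action pair from learned nonlinear dynamics models, so the dependency graphs (and hence the reward) are local to each transition rather than global as in this toy sketch.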
For learning the object interactions themselves, they propose using a `Causal Graphical Model` to represent the transition at a given time step 't'. CGM consists of three parts: `1. Nodes`: These represent the state at time step 't' (S_t), the action at time step 't' (A_t), and the state at the next time step 't+1' (S_t+1). `2. Directed Graph G`: This graph illustrates the global dependencies between the nodes.
`3. Conditional Distribution 'p'`: For each state variable at the next time step (S_n_t+1), there is a conditional distribution 'p' which represents the probability of reaching that state given the current state and action.
The assumption is that the transition probability `P(s_t+1 | st, at)` can be factorized into a product of conditional probabilities for each state variable at the next time step. Here, `Pa(v)` refers to the parent nodes of a node `v` in the causal graph `G`. This means that the probability of transitioning to a new state depends on the current state and action, and can be represented as a product of several smaller probabilities based on the causal relationships.
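The factorization described above can be written compactly (using N for the number of state variables; notation follows the summary):

```latex
P(s_{t+1} \mid s_t, a_t) \;=\; \prod_{n=1}^{N} p\!\left(s^{n}_{t+1} \,\middle|\, \mathrm{Pa}(s^{n}_{t+1})\right),
\qquad \mathrm{Pa}(s^{n}_{t+1}) \subseteq \{s^{1}_{t}, \dots, s^{N}_{t}, a_t\}.
```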
Instead of using these global graph `G` which includes all possible dependencies, the authors use a local causal graph `G_t`.
This Local Causal Graph Model aims to represent real-world scenarios more accurately and efficiently by focusing on the relevant dependencies at a given time rather than considering all possible interactions. This can potentially lead to more efficient learning and better generalization in reinforcement learning environments.
ELDEN's main goal is to identify the dependencies that are locally active among the entities in the environment.
In other words, for a specific state 'st' and action 'at',
ELDEN tries to determine which entities in the environment have an active causal relationship with each other.
This involves constructing a graph where the nodes represent entities, and the edges represent active causal relationships between them.
This graph is specific to the given state and action (st, at), and is called the local causal graph 'Gt'.
**Experiments**
Experiments are conducted with the following baselines: `pCMI (point-wise conditional mutual information)`, `Attn (attention)`: Use the score between each entity pair computed by the attention modules inside the dynamics model to quantify local dependencies. `Vanilla PPO`.
Presented tasks: `Kitchen Task` has very high variance! In `Thawing Dynamics`, curiosity outperforms ELDEN. `CarWash` results also have high variance.
Strengths: - The ideas presented here are very interesting and novel. Creating an intrinsic reward based on local dependencies to aid exploration in RL, especially in environments with sparse rewards, is novel.
Weaknesses: - Weak Experimental results. The results as they stand are not very convincing.
That being said, these are hard problems so I understand the challenges in making these environments to work.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - For the PPO baseline, this serves as a baseline with sparse reward? So it is not surprising that this baseline doesn't really work?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: - Underwhelming experiments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed reading of our paper and for the very constructive suggestions! We hope our responses adequately address the following concerns regarding the evaluation of our work.
>The results as they stand are not very convincing. That being said, these are hard problems so I understand the challenges in making these environments work.
While we appreciate where the reviewer is coming from, we submit that the experimental results are stronger than they may appear to be on the surface. Specifically,
- High variance is normal: Notice that under our problem setup, the agent gets a reward only if it successfully completes the task. For such tasks, it is very common for successful exploration methods to have high variance. (for example, in Fig 5 (c) of Pathak et al [1]). Intuitively, this is because the intrinsic reward can at its best provide some sort of state-coverage guidance, and the agent often has to rely on luck to obtain its first successful run, which may never happen for some random seeds.
- Extra experiment domains: Also notice that we provide extra experiments on the crafter (Minecraft 2D) domain in appendix Sec G. In this challenging environment, the agent has > 20 objects to interact with, 5 tools that it can craft, and a complex technology tree to follow, where the agent has to efficiently explore through these interactions.
- Additional baseline: Additionally, as part of the rebuttal, we added new experiments that compare against a new baseline, RND, on the crafter domain. Results can be seen in the global response in Fig 1.
[1] Deepak Pathak, Pulkit Agrawal, Alexei A. Efros and Trevor Darrell. Curiosity-driven Exploration by Self-supervised Prediction. In ICML 2017.
> For the PPO baseline this serves as a baseline with sparse reward? So it is not surprising that this baseline doesnt really work?
Yes, as described in Sec 4.2 Vanilla PPO is a baseline that learns from sparse reward only. So as the reviewer points out, it is not expected to work in the challenging settings that we consider.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: I acknowledge that I have read the reviews and I have no further questions. | Summary: The paper proposes an intrinsic exploration reward that explicitly takes dependencies between different entities in the environment into account. Such an intrinsic reward helps sparse-reward RL in a few skill-based domains.
Strengths: - I find the idea of modeling dependency graphs between entities interesting and worth exploring.
- The related work section is written especially well, making it very easy to understand how the proposed work is contextualized.
- Explanation of the method is clearly written and easy to understand.
- Results on given domains are promising.
Weaknesses: - The major weakness of the paper is in the evaluation. In particular, the domains that are used for evaluation all use high-level primitives (like goto object and pickup object) which results in relatively few steps in order to accomplish a goal. It's hard to argue that this setting is comparable to the difficulty of something like Franka Kitchen (low-level control in a kitchen environment with sparse reward). And especially given many moving parts are required to make this work, it would limit the applicability in my reading. It's even more unfortunate that this is not justified and only discussed in detail in Appendix C, given that it transforms a long-horizon sparse-reward problem (the typical understanding) into a relatively short-horizon problem.
- With regards to the above, ELDEN is proposed as a standalone method, but in the most complex domain tested (Kitchen), an additional exploration method, PER, is required in order to make it work.
- Too much is pushed to the appendix, for instance architectural details that are important to understand for evaluation (Attn Mask in section 4.1) are only discussed in Figure 4. Ablations for a method with many moving parts only show up in Appendix E.
- All domains are discrete action, and the most convincing experiments (no PER required) are discrete state. The use of partial derivatives through Mixup is a little bit strange and I don't understand where the claim "dynamics models trained on such data... reflect local dependencies" is justified.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - How is the variance of the dependency graph computed (Algorithm 1)?
- Is there a domain with low-level control in which ELDEN can be benchmarked?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Some limitations are addressed in the Conclusion, and more interesting ones are also discussed in Appendix F. In particular Appendix F makes the point that ELDEN is suboptimal in navigation environments, and thus that it is particularly scoped for settings with a big dependency graph.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed reading of our paper and constructive suggestions! We hope our responses adequately address the following concerns regarding the evaluation of our work.
> The experiment domains use high-level primitives which results in relatively few steps in order to accomplish a goal. It's hard to argue that this setting is comparable to the difficulty of something like Franka Kitchen (low-level control in a kitchen environment with sparse reward).
(from the questions section: Is there a domain with low-level control in which ELDEN can be benchmarked?)
We agree with the reviewer that the Franka Kitchen environment is interesting and challenging. However, we believe environments with high-level primitives that focus on a different level of exploration challenges are not inherently less interesting or easier to solve. Specifically,
- As mentioned in the introduction, our method aims to solve tasks with many interaction modes, where the challenge is to **find the correct sequence of interactions among many options** that leads to task success. For example, in our crafter (Minecraft 2D) environment described in appendix G, the agent has > 20 objects to interact with and 5 tools that it can craft, and the agent has to efficiently explore through these interactions. As demonstrated by our experiments, even with high-level primitives, all of the baseline methods fail to solve the task (except for thawing), establishing that dealing with many interaction modes is indeed challenging.
- By comparison, the original Franka Kitchen environment [1] has almost no object dependencies, where each object can be toggled independently and the task objective is just to achieve multiple independent goals, one for each object. As such, low-level control in the Franka Kitchen environment is challenging in the aspect of **goal-reaching** (moving the end-effector to the object, similar to navigation) and **precise sensorimotor control** (grasping the object), but not in the aspect of selecting the correct interaction among many options, making it orthogonal to the focus of our method.
That being said, we provide new experimental results examining the performance of ELDEN on low-level, continuous domains in the global response. Specifically, we test replacing the action primitives in our kitchen environment with low-level position control of the gripper.
As shown in Fig 3 in the global response, the problem becomes so hard that none of the methods can make meaningful progress. To the best of our knowledge, most methods that can learn with low-level actions and sparse rewards require some form of human priors (like offline data) [2].
We agree with the reviewer that an exciting future direction of this work would be to combine ELDEN with more fine-grained exploration methods to solve domains that entail both rich interaction and low-level controls.
[1] Gupta, Abhishek, et al. Relay policy learning: Solving long-horizon tasks via imitation and reinforcement learning. CoRL, 2019.
[2] Alakuijala, Minttu, et al. Learning reward functions for robotic manipulation by observing humans. ICRA 2023
>And especially given many moving parts are required to make this work, it would limit the applicability in my reading. With regards to the above, ELDEN is proposed as a standalone method, but in Kitchen domain, an additional exploration method, PER, is required in order to make it work.
We want to clarify that, to generate the intrinsic reward, our method only needs to learn an ensemble of dynamics models.
Please note that prioritized experience replay (PER), or sample prioritization, is applied to **dynamics training** only, rather than an exploration method for policy learning. As the data buffer grows with collected transitions, PER is a common technique to let the dynamics model learn from new data efficiently, and it is widely used in other intrinsic reward methods, like Lobel et al [3].
For all methods, we use PPO to learn the policy. PPO is an on-policy algorithm that does not learn from the replay buffer, and **PER is not applicable**.
[3] Lobel, Sam, Akhil Bagaria, and George Konidaris. Flipping Coins to Estimate Pseudocounts for Exploration in Reinforcement Learning. ICML 2023
>Too much is pushed to the appendix, for instance architectural details that are important to understand for evaluation (Attn Mask in section 4.1) are only discussed in Figure 4. Ablations for a method with many moving parts only show up in Appendix E.
We agree with the reviewer that details and justifications of the environments, as well as some ablation experiments should be moved to the main text. We will prioritize making space for that in the next version of our paper.
However, with regards to Attn Mask, it is a baseline that we describe clearly in the text of Section 4.1. The architecture is not our own contribution, so we consider it appropriate to appear only in the appendix.
>The use of partial derivatives through Mixup is a little bit strange and I don't understand where the claim "dynamics models trained on such data... reflect local dependencies" is justified.
Mixup has been shown to smooth the partial derivatives of the model output w.r.t. inputs (Fig 1 (b) and Fig 2 (b) in [4]), and thus is suitable to improve partial derivative estimation in our method.
For the justification that mixup improves local dependency identification, the results are shown in Table 3 (the “no Mixup” row vs others).
[4] Zhang, Hongyi, et al. "mixup: Beyond empirical risk minimization." arXiv preprint arXiv:1710.09412 (2017).
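For readers unfamiliar with mixup [4], a minimal sketch of the augmentation (illustrative numpy, not the authors' implementation; the toy data and `alpha` value are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha=1.0):
    """Mixup (Zhang et al., 2017): train on convex combinations of input/target pairs."""
    lam = rng.beta(alpha, alpha)  # mixing coefficient in [0, 1]
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Two toy (state-action input, next-state target) training pairs.
x1, y1 = np.array([0.0, 1.0]), np.array([1.0])
x2, y2 = np.array([1.0, 0.0]), np.array([0.0])
x_mix, y_mix = mixup(x1, y1, x2, y2)
```

Training the dynamics ensemble on such interpolated pairs smooths the learned function between data points, which is what makes its partial derivatives better behaved for dependency estimation.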
>How is the variance of the dependency graph computed?
Thanks for pointing this out. Each dependency graph is represented as a binary adjacency matrix. Given a set of graphs, we calculated their per-entry variances and used the mean across all entries in the graph as the eventual variance. We will include this specification in the next version of the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their many clarifications. I have some specific responses below.
- **Low-level control + ELDEN**: I provided Franka Kitchen as an example given that it's close to the authors tested tasks as there is a dependence to the objects in that reward is only obtained when the objects are interacted with in the correct sequence, not independently. Still, I appreciate the additional experiment and clarification. I do think that the current tasks are a bit toy given that they rely on such a structured action space, but I appreciate the paper's perspective and the failure of other exploration methods a bit more.
- **Moving parts**: I had a misunderstanding in my initial read as to how PER is integrated and I thank the authors for clearing this up.
- **Too much in appendix**: Thanks for the clarification on Attn Mask, I agree that it probably makes sense to leave in the appendix.
- **Partial derivatives**: Thanks for the clarification, without familiarity with these precise plots it is difficult to find which prior results the paper refers to, and it'd be nice to have a reference to Table 3 in the appendix here.
- **Variance of graph**: Thanks for this clarification
I think the paper is stronger than I had initially judged, though I still believe that the tasks tested are somewhat toy. In particular, for the simulated Kitchen, which is presented as the most "real" of the tasks, due to its reliance on exact positions of objects and grasping primitives. I do think that I would be happy to update my score to a 5 though.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the thoughtful feedback. We are glad to see that our responses addressed the concerns raised by the reviewer and will make sure to include all suggestions from the reviewer and new analyses in the paper.
We thank the reviewer for agreeing to increase their score to 5. We noticed that at the moment the rating for our paper is still 4, and would like to gently remind the reviewer about this potential oversight. | Summary: This paper proposes a novel intrinsic reward called ELDEN for reinforcement learning. ELDEN encourages the discovery of new local dependencies between entities. Experiments on some robotic tasks are carried out to valid the idea.
Strengths: 1. The idea is novel and interesting.
2. The usage of dynamics models to approximate causal dependency is interesting.
3. The method does not make any assumption of knowing the ground-truth dependencies.
4. The paper is well-written. The authors discussed how their method differs from existing works.
Weaknesses: 1. My concern is about the significance of the proposed method. This work focuses on solving semantic-level tasks and assumes knowledge of the underlying objects. Then, why should we use RL in this case? A better option is probably using large language models (LLMs) to do the semantic reasoning directly. Another option is to use an LLM to design the exploration reward like [1]. I would like to see the authors' opinion on this.
2. In the experiments, two of the considered tasks are not solved (reach normalize score 1.0). The performance is not strong enough.
3. The authors can compare to more baselines. One important baseline can be RND.
[1] Yuqing Du, Olivia Watkins, Zihan Wang, Cedric Colas, Trevor Darrell, Pieter Abbeel, Abhishek Gupta, Jacob Andreas. Guiding Pretraining in Reinforcement Learning with Large Language Models. In ICML 2023.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. What are the state-space and action-space of the used tasks (including the meanings and number of dimensions)? How long is the horizon for each task?
2. How does proposed method perform in image-based tasks?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors discussed their limitations and potential negative societal impact properly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed reading of our paper and constructive suggestions! We hope our responses adequately address the following concerns regarding the significance and evaluation of our work.
> Weakness 1: compare ELDEN (and RL in general) with LLMs
Thank you for making this point. While we are as excited as everyone else about the possibilities of leveraging LLMs in robotics research, we also think it’s important that the community does not lose interest in research outside the scope of LLMs. While it may indeed be possible to study a similar problem through the lens of LLMs, please note that
**ELDEN does not rely at all on the semantic meaning of state factors and actions**. It only requires that the state space is factored. As pointed out by R1, this is itself an important assumption to examine. But it is a completely separate assumption from those made by LLM-based research. Specifically, ELDEN differs from research using LLMs (including Du et al. 2023 [1]) in at least the following ways:
- LLMs require the semantic meaning of each state factor and action for effective reasoning.
- Using LLMs for reasoning assumes that LLMs know how the environment transitions, but many environments are naturally hard to describe in language and are thus not well learned by LLMs. In contrast, our method (and RL in general) does not assume pre-existing knowledge of the transition probabilities and can learn from the environment interactions.
- Using LLMs constrains the performance to be at the human level, while our method (and RL in general) has the potential to surpass humans and find novel strategies for solving the task, like how AlphaGo finds new Go strategies.
>Weakness 2: two tasks are not solved.
Please note that in those two tasks, our method was able to solve them for 2 out of 3 random seeds. The other seed only achieves some intermediate stages, rendering the average score smaller than 1.0.
For the unsuccessful seed, notice that under our problem setup, the agent gets a reward only if it successfully completes the task. **For such tasks, it is very common for successful exploration methods to have high variance.** (for example, in Fig 5 (c) of Pathak et al [2]). Intuitively, this is because the intrinsic reward can at its best provide some sort of state-coverage guidance, and the agent often has to rely on luck to obtain its first successful run, which naturally introduces lots of stochasticity.
[2] Deepak Pathak, et al. Curiosity-driven Exploration by Self-supervised Prediction. In ICML 2017.
>comparison with RND.
In the global response, we present new experiments comparing against RND on the challenging Minecraft 2D domain, where our method significantly outperforms all baselines including RND.
>Environment specifications (state and action space, their meanings, and horizon)
We list the environment specifications below and will include the extra specifications in the next version of the paper (note that the **MEANINGS** of state variables and actions are not given to our method). Given the space limits, we put the specification of Thawing and CarWash environments in the second part of the global response:
- Kitchen:
- state space (43d), consisting of robot end-effector position (3d) and velocity (3d), robot gripper angles (2d) and angular velocity (2d), butter position (3d) and quaternion (4d), butter melting state (1d), meatball position (3d) and quaternion (4d), meatball cook state (1d), pot position (3d) and quaternion (4d), stove position (3d), target position (3d), stove switch position (3d) and state (1d),
- action space (12d), consisting of
- move the hand to 1) the butter, 2) the meatball, 3) above the pot, 4) the pot handle, 5) the stove, 6) the stove switch.
- grasp 7) the butter, 8) the meatball, 9) the pot handle, if the hand is close to it
- 10) turn on/off the stove switch, if the hand is close to it
- 11) open the gripper
- 12) no action, which is useful for waiting for the meatball to be cooked
- given horizon: 200
- minimal # of time steps required to finish the task: 25
- Minecraft 2D:
- state space (40d), consisting of
- agent position (2d) and direction (1d)
- positions (2d) of the 1 gem block, 5 water blocks, 2 grass blocks, 5 wood blocks, and 1 stone block (28d in total).
- the number of materials (grass, wood, stone, block) and tools (axe stick, wood axe, stone axe, rope, bridge) in the inventory (9d).
- action space (21d):
- move next to each of the environment objects (gem, water, grass, wood, stone blocks, and the workbench), if there is a path (15d in total)
- craft a tool among the choice of (axe stick, wood axe, stone axe, rope, bridge) if the agent has enough materials and is next to the workbench (5d)
- collect the material that the agent is facing, applicable only if the agent has the tool necessary for collection (1d)
- given horizon: 200
- minimal # of time steps required to finish the task: 23
In these environments, the exploration challenges are caused by the large action space and the chained dependencies — the agent needs to efficiently explore throughout many interaction modes and find the correct order of interactions that leads to task success.
For all environments, the actions only have effects when their preconditions are met, i.e., in the kitchen environment, grasping the butter will execute only if the robot hand is empty and close to the butter.
>How does the proposed method perform in image-based tasks?
Solving image-based tasks is beyond the scope of this paper. However, a simple potential way to extend our method to image-based tasks is to extract disentangled features from an image (e.g. Schölkopf, Bernhard, et al. [1]) and use the extracted features as the state space. We will add this as an interesting direction for future work.
[1] Schölkopf, Bernhard, et al. "Toward causal representation learning." Proceedings of the IEEE 109.5 (2021): 612-634.
---
Rebuttal Comment 1.1:
Title: Thank you for the response.
Comment: I would like to thank the authors for the detailed response. For weakness 1, the authors point out that ``many environments are naturally hard to describe in language and are thus not well learned by LLMs''. I agree with this, but my concern is whether ELDEN is able to solve this kind of problem. For example, it is hard for an LLM to solve fine-grained, low-level robotic locomotion or manipulation problems. However, ELDEN could not solve this problem effectively either (e.g. the updated cheetah results).
In conclusion, I would like to know if there is any specific task that ELDEN can solve but LLM and other generic exploration methods fail to solve.
---
Reply to Comment 1.1.1:
Comment: Thank you for your quick response!
Firstly, we want to point out that we do compare against other generic exploration methods in Sec 4 and Appendix G of our paper, where ELDEN consistently outperforms them across all of our evaluation domains. These are clearly tasks that “ELDEN can solve but other generic exploration methods fail to solve”.
With respect to “tasks that ELDEN can solve and LLMs fail to solve”: We want to emphasize that using LLM for decision-making/exploration has a lot more limitations than just the inability to handle low-level controls, including but not limited to that **LLMs require the semantic meaning of actions and state variables, and environment-specific prompt engineering** [1].
- With regard to actions: For example, if the high-level skills in our kitchen environment are learned through data-driven methods such as unsupervised skill discovery methods [2][3], it will be hard to describe such behavior in language, causing LLM-based methods to fail.
- With regard to state variables: For example, Minecraft keeps updating its contents, such as new tools and enemies, and LLMs do not know how those tools can be used nor how new enemies behave.
In both cases, ELDEN can be directly applied to solve the problem, as in our experiments, since it does not require knowing the semantic meaning of these states and actions.
We share your enthusiasm for the possibilities that LLMs represent for the future of machine learning and robotics. However, from our perspective, research on using LLMs to aid exploration is still in its infancy and there are no established benchmarks that we can easily compare against.
We are concerned that exclusively requiring papers to directly compare with LLMs could potentially have a negative impact on the field, especially on tasks where LLMs' effectiveness has not been demonstrated without additional assumptions (e.g., semantic meanings, environment-specific prompt engineering).
If the reviewer has a concrete suggestion for an available system that can accomplish the tasks demonstrated in this paper, we would be delighted to try it out and compare it with our method.
[1] Yuqing Du, Olivia Watkins, Zihan Wang, Cedric Colas, Trevor Darrell, Pieter Abbeel, Abhishek Gupta, Jacob Andreas. Guiding Pretraining in Reinforcement Learning with Large Language Models. In ICML 2023. \
[2] Eysenbach, Benjamin, et al. "Diversity is all you need: Learning skills without a reward function." arXiv preprint arXiv:1802.06070 (2018). \
[3] Groth, Oliver, et al. "Is curiosity all you need? on the utility of emergent behaviors from curious exploration." arXiv preprint arXiv:2109.08603 (2021). | Summary: This paper proposes ELDEN, a method for intrinsic reward based on local dynamics dependencies between factored state variables. It learns an ensemble of factored dynamics models, and uses the magnitude of the partial derivatives to detect local dependencies between state variables, and uses the ensemble variance as an exploratory intrinsic reward. This method is tested across a variety of environments such as gridworlds and robotic control tasks, showing great accuracy in capturing local dependencies and exploratory performance.
Strengths: This paper presents a very clear and thorough test of the main ELDEN method. ELDEN itself is straightforward to understand, and section 4.1 does a convincing job of evaluating how well it detects local dependencies compared to reasonable baselines. Section 4.2 also has reasonable baselines, and clearly demonstrates the effectiveness of ELDEN. The ablations on mixup and the regularization are also well done.
Overall, many interesting questions around ELDEN are covered by the paper, and the results are very clear.
Weaknesses: A common weakness, not specific to ELDEN, is of course the assumption of the factored state space. While this can be overcome with object-identifying representations, there is still the question of how effective ELDEN may be when paired with learned factored representations as opposed to ground truth. Hopefully future work can look in this direction.
While it is mentioned that ELDEN is not meant to excel in all domains, an additional experiment in a more common domain such as the sparse reward tasks of DM Control Suite would be enlightening to see. Perhaps ELDEN would also work well, or it may not compared with the baselines; either way it would provide some additional insight into the limitations of ELDEN.
---- After Author Rebuttals ----
After reading other reviews and author rebuttals, I maintain my score. I think this paper has clearly shown when and where ELDEN can excel, and the limitations when ELDEN is not very effective.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: It is mentioned that ELDEN can help with more complex, chained dependencies; however, the basic method still only looks at 1-step local dependencies. Why would one expect ELDEN to do better than other 1-step methods?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Are addressed appropriately in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed reading of our paper and constructive suggestions! We hope our responses adequately address the following questions about our work.
> A common weakness, not specific to ELDEN, is of course the assumption of the factored state space. Hopefully future work can look in this direction.
We agree that testing the effectiveness of ELDEN when the factored representations are learned would be an interesting next step.
> While it is mentioned that ELDEN is not meant to excel in all domains, an additional experiment in a more common domain such as the sparse reward tasks of DM Control Suite would be enlightening to see. Perhaps ELDEN would also work well, or it may not compared with the baselines; either way it would provide some additional insight into the limitations of ELDEN.
In the global response, we add the results of the DMC Cheetah domain with sparse rewards in Fig 2. As expected by the reviewer, ELDEN and the empowerment method (CAI) perform worse than the curiosity-based method. We analyze the reasons as follows:
- With regard to ELDEN: DMC with sparse rewards is challenging in terms of precise low-level sensorimotor control, which is orthogonal to ELDEN's focus on selecting the correct interaction among many options. Hence, it is not surprising that ELDEN does not help with exploration in the cheetah domain.
- With regard to CAI: the cheetah task requires precise control of each joint. In contrast, CAI is motivated to maximize its control over all joints (rotating them between the joint limits), and thus it fails to learn the tasks and harms the exploration.
> It is mentioned that ELDEN can help with more complex, chained dependencies, however, the basic method is still only looking at 1-step local dependencies. Why would one expect ELDEN to do better than other 1-step methods?
Take the Minecraft 2D environment in Appendix G as an example, where it has the chained dependencies of “collect wood” $\rightarrow$ “craft wood axe” $\rightarrow$ “collect stone” $\rightarrow$ “craft stone axe” $\rightarrow$ “collect gem”. As shown in Appendix G, our method solves this problem better compared to baselines (dynamics curiosity, dynamics uncertainty, CAI, and RND) because:
- Our method prioritizes novel 1-step interactions (local dependencies). In the initial learning stage, compared to baselines that are preoccupied with repetitive movements for seeking novel state values (e.g., new agent positions after movements), our approach prioritizes novel interactions (e.g., collecting wood, grass, or other objects). This proactive exploration strategy increases the agent's chances of successfully collecting wood, which, in turn, **opens up new possibilities for further interactions**, such as crafting the wood axe.
- Though the reward is based on 1-step local dependencies, as RL optimizes the return, the agent will seek both current and future novel dependencies. For example, as the agent undergoes training, it may become familiar with the earlier interactions in the chain (i.e., up to “collect stone”). However, it remains motivated to tackle the later, less familiar dependencies starting from “craft stone axe”. Hence, even if the earlier dependencies no longer give any 1-step reward, the agent will still complete them to continue exploring the later dependencies and maximize its return.
We will include this discussion in the next version of the paper.
---
Rebuttal Comment 1.1:
Title: Clarifying the 1-step vs chained dependencies
Comment: Thank you for addressing my questions!
I want to clarify what I meant by asking "Why would one expect ELDEN to do better than other 1-step methods?". This was in part motivated by the following statement about empowerment in Section 2.2: "However, due to the difficulty in measuring the mutual information across a multi-step trajectory, existing empowerment-based methods only measure 1-step empowerment, ...", but also about the repeated emphasis on chained dependencies. The two mechanisms you mentioned in your response are general properties of RL, which would equally apply to other methods such as 1-step empowerment or 1-step curiosity - the RL itself will learn to visit parts of the state space with large aggregate novelty. The fact that "chained dependencies" is mentioned so many times in the paper could give the wrong impression that ELDEN is trying to learn a much more complicated dependency graph, whereas it is actually a 1-step method just like many other methods. So I think it could be useful to just add a bit of clarity that ELDEN is still a 1-step model, but is able to handle chained dependencies because of RL.
---
Reply to Comment 1.1.1:
Title: Thanks for your suggestions on improving clarity!
Comment: Thank you for your quick response! We agree that the part about "chained dependencies" should be clearer, and we will add the suggested clarification in the paper. | Rebuttal 1:
Rebuttal: We thank all reviewers for the detailed reading of our paper and constructive suggestions! In the global response, we would like to describe the setup of additional experiments and the results can be found in the attached pdf.
- Comparison with RND: Using the challenging crafter (Minecraft 2D) domain described in Appendix G (featuring > 20 objects for the agent to interact with and 5 tools to craft, with a complex technology tree), we compare our method against RND. As shown in Fig 1, our method ELDEN significantly outperforms all baselines, which again demonstrates the strength of ELDEN in solving tasks with many interaction modes and complex preconditions.
- Results on a DeepMind Control Suite (DMC) domain, asked by R1 kab1: In Fig 2, we show the performance in the Cheetah environment with sparse reward. Following the "no free lunch theorem", no intrinsic reward method can perform best in all environments. As discussed in Sec 5 (and above), our method ELDEN aims to solve tasks with many interaction modes, where the challenge is to find the correct interaction leading to task success. However, DMC with sparse rewards is challenging in terms of precise low-level sensorimotor control. Hence, unsurprisingly, ELDEN performs worse than curiosity-based methods.
- Results on using low-level actions in the Kitchen domain, asked by R3 G2vP: In Fig 3, in the kitchen environment, we use low-level actions consisting of end-effector x, y, z movements (in the range of [-5, 5] cm) and gripper control (in the range of [-1, 1], where > 0 closes the gripper and < 0 opens it), which is one of the low-level action formats provided in robosuite [1]. Similar to the DMC domains, using low-level actions is challenging in terms of goal-reaching (moving the end-effector to the object, similar to navigation) and precise sensorimotor control (grasping the object). To the best of our knowledge, in such manipulation domains, most methods that can learn from sparse rewards with low-level actions require some form of human priors (like offline data) [2][3]. As a result, none of the methods make meaningful progress.
With regard to DMC domains and manipulation domains with low-level actions, we agree with the reviewer that an exciting future direction of this work would be to combine ELDEN with more fine-grained exploration methods to solve such domains that entail both rich interaction and low-level controls.
[1] Zhu, Yuke, et al. "robosuite: A modular simulation framework and benchmark for robot learning." arXiv 2020.
[2] Gupta, Abhishek, et al. Relay policy learning: Solving long-horizon tasks via imitation and reinforcement learning. In CoRL, 2019.
[3] Alakuijala, Minttu, et al. Learning reward functions for robotic manipulation by observing humans. ICRA 2023
### In the remaining global response, we would like to continue the descriptions of Thawing and CarWash environments, asked by R2 FZmh.
> Environment specifications (state and action space, their meanings, and horizon)
- Thawing:
- state space (12d), consisting of agent position (2d), agent direction (1d), fish position (2d), fish frozen state (1d), sink position (2d), refrigerator position (2d), refrigerator state (1d), timestamp (1d)
- action space (7d), consisting of
- move towards 1) the refrigerator, 2) the sink, 3) the fish,
- 4) open, or 5) close the refrigerator,
- 6) pickup, or 7) drop the fish
- given horizon: 100
- minimal # of time steps required to finish the task: 11
- CarWash:
- state space (18d), consisting of agent position (2d), agent direction (1d), rag position (2d), rag state (2d), sink position (2d), sink state (1d), bucket position (2d), soap state (2d), car position (2d), car state (1d), timestamp (1d).
- action space (10d), consisting of
- move towards 1) the rag, 2) the sink, 3) the soap, 4) the bucket, 5) the car
- pick up 6) the rag, 7) the soap
- drop 8) the rag, 9) the soap
- 10) toggle the sink
- given horizon: 300
- minimal # of time steps required to finish the task: 31
Pdf: /pdf/d309a2463ec1c64533b8ce853f241f72b5408d45.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Cross-links Matter for Link Prediction: Rethinking the Debiased GNN from a Data Perspective | Accept (poster) | Summary: This paper addresses the bias between internal links and cross-links by augmenting cross-links and combining two models consisting of the original and debiased models. Specifically, the authors show that the number of cross-links is fewer than internal links in three real-world datasets. Thus, with Jaccard coefficient score, they augment cross-links and train a debiased model on the augmented graphs. To resolve the trade-off between utility and fairness, they fuse the representation of the original model (which is trained on the original graph), and the debiased model. In experiments, they show that the proposed method resolves the bias and even improves the overall performance.
Strengths: - In contrast with the existing methods, the authors mainly target the bias based on graph topology.
- The suggested method mitigates the bias without compromising performance on several datasets under various architectures.
Weaknesses: - It seems insufficient to demonstrate that internal-links are more common than cross-links based solely on three datasets. For heterophilous graphs, cross-links would be more prevalent when each node label is regarded as a community. In this case, it is more reasonable to augment internal-links, but the suggested method is not able to do it.
- The rationale for supervision augmentation, which is the core component to resolve the bias, seems weak. Why is the Jaccard coefficient score better than other options? Due to its formulation, it would be difficult to augment connections between a node with a high degree and a node with a low degree. However, using an edge predictor does not have this limitation and is a simpler approach. (Specifically, train an edge predictor and predict edges with it. Then, choose edges based on the confidence of the predictor.)
- The performance of models without fusion is inferior to base models in Table 3. Since supervision augmentation is the core component to mitigate the bias, this component is more important than other components, but it is weak. Based on this observation, it is probable that using UGE as a debiased model would show better performance than the current approach. (Slight tuning is needed to combine UGE.) Then, the contribution of this paper would be marginal.
- Comparing baselines only under GraphSAGE seems insufficient. Could you compare the proposed method with UGE under LightGCN and GAT, too?
- The datasets used in this paper are quite different from the baselines such as FairAdj and UGE. UGE uses Pokec-z, Pokec-n, and MovieLens-1M, while FairAdj utilizes Oklahoma97, UNC28, Facebook#1684, Cora, Citeseer, and Pubmed. Are there any reasons to evaluate the methods on different datasets? According to FairAdj, Cora, Citeseer, and Pubmed have more internal-links than cross-links. Thus, the performance superiority on these graphs can further support the effectiveness of the proposed method.
- The connections between sentences in the abstract seem unnatural.
Minor issues
- In line 24, “research concerns” seems unnatural. I recommend “research interests”.
- In line 394, “Epilogue” seems non-academic expression. I recommend “Conclusion”.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - In the ablation study, I’m interested in the experimental details for “-Augment”. I understand it as a policy that randomly chooses connections among possible cross-links (not including internal-links). If not, it would be more persuasive to show a comparison with the policy I described.
- In Table 4, it would be more persuasive to provide the performance of internal-links and cross-links, as shown in Table 2, and 3.
- It is difficult to understand Figure 2. Could you provide a more detailed description of Figure 2?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors provide several limitations in the main paper. However, as I mentioned in Weakness, it would be better additionally to address other issues such as the applicability to heterophilous graphs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### [About the concerns on heterophilous graphs (W1)]
1. **Additional statistics on multiple heterophilous graphs are provided.** To validate the data bias on heterophilous graphs, we conduct community detection with the Louvain algorithm on six datasets that have low homogeneity ratios (Hom. Ratio). Specifically, we follow the definition of homogeneity ratio in [1]: $\mathrm{Hom.\ Ratio}=\frac{\sum_{\langle u, v\rangle\in\mathcal{E}}\mathbb{1}(y_u=y_v)}{|\mathcal{E}|}$,
where $\mathcal{E}$ denotes the edge set, $y_u$ represents the label of node $u$, and $\mathbb{1}(\cdot)$ is an indicator function. The numbers of internal-links and cross-links are illustrated in Table R1 within the uploaded PDF. It can be seen that, even in datasets with low homogeneity ratios, cross-links still fall significantly short of internal-links. **This observation further supports that the data bias between internal-links and cross-links exists widely in the real world, even in heterophilous graphs.**
2. **Our model's flexibility to augment internal-links.** In fact, the fundamental idea of our approach is to focus on the minority group in the data and then find high-confidence, unobserved samples for supervision augmentation. **In this way, even in a context where the number of cross-links exceeds that of internal-links, we can handle the situation by simply reversing the criterion within the supervision augmentation**, thus augmenting the corresponding minority.
[1] Du et al. GBK-GNN: Gated Bi-Kernel Graph Neural Networks for Modeling Both Homophily and Heterophily. WWW 2022.
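For concreteness, the homogeneity ratio above can be computed directly from an edge list and node labels. The following is a minimal illustrative sketch (the function name and toy data are ours, not from the paper):

```python
# Minimal sketch of the homogeneity ratio: the fraction of edges whose
# endpoints carry the same label. Toy edge list and labels are hypothetical.
def homogeneity_ratio(edges, labels):
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)

edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
labels = {0: "a", 1: "a", 2: "b", 3: "b"}
print(homogeneity_ratio(edges, labels))  # 0.5: only (0,1) and (2,3) are homophilous
```

A low value (well below 0.5 on larger graphs) indicates a heterophilous graph in the sense used above.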
------
### [About the concerns on supervision augmentation (W2)]
Since we need to train a separate edge predictor for supervision augmentation, we believe that this kind of approach may potentially inherit the data bias between cross-links and internal-links, thereby limiting its effectiveness in mining unobserved cross-links. In contrast, the heuristic methods in our paper require no training process, inherently sidestepping the risk of introducing bias from data. What's more, the experimental results also prove that the methods based on Jaccard coefficient and random walk are simple yet effective.
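To make the contrast concrete, a Jaccard-based scorer needs no training at all. Below is an illustrative sketch (the toy graph, community assignment, and names are our own assumptions, not the paper's code): unobserved cross-community pairs are ranked by the Jaccard coefficient of their neighborhoods, and the top-scoring ones would become augmented supervision.

```python
from itertools import combinations

# Illustrative sketch of Jaccard-based supervision augmentation.
def jaccard(neighbors, u, v):
    union = neighbors[u] | neighbors[v]
    return len(neighbors[u] & neighbors[v]) / len(union) if union else 0.0

# Hypothetical toy graph: two triangle communities bridged by edge (2, 3).
neighbors = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
    3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4},
}
community = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
existing = {frozenset((u, v)) for u in neighbors for v in neighbors[u]}

# Rank unobserved cross-community pairs by Jaccard score; the top ones
# would be added as positive supervision for the debiased model.
candidates = sorted(
    ((u, v, jaccard(neighbors, u, v))
     for u, v in combinations(neighbors, 2)
     if community[u] != community[v] and frozenset((u, v)) not in existing),
    key=lambda t: -t[2],
)
print(candidates[0][:2])  # (0, 3): nodes 0 and 3 share neighbor 2
```

Note the limitation the reviewer raised is visible here: a pair with very different degrees has a large neighborhood union, which suppresses its Jaccard score.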
------
### [About the concerns on ablation study and baseline UGE (W3, W4)]
1. It seems that there might have been a slight misunderstanding regarding the ablation study. As a core component of our model, the embedding fusion module is designed to help GNNs mitigate the impact of noise introduced by data augmentation. While standalone data augmentation can effectively aid model debiasing, it may affect the overall performance of the model. In the ablation study, we remove the embedding fusion module and retain the supervision augmentation, which is denoted as "-fusion". As a result, in Table 3, **"-fusion" achieves the best debiasing performance -- the lowest Bias**, **while the overall performance is affected.**
2. **Additional comparisons based on other base GNNs are provided.** As for the comparisons with UGE, we have further implemented UGE on GAT and LightGCN. The results are displayed in Table R4 within the uploaded PDF. As observed, due to the modifications in the objective functions, although UGE can achieve the best debiasing performance in certain instances, **it is unable to prevent noticeable utility degradation** compared to our methods. **This kind of debiasing, achieved by sacrificing utility, is not the goal of our work.**
------
### [About the concerns of datasets (W5)]
We would like to address the reviewer's concerns related to the datasets from the following two aspects:
1. **The reasons for choosing datasets.** In fact, **it seems that a universally acknowledged benchmark has not yet been established** in the prior work. To be precise, we summarize the datasets used in prior works in Table R2 within the uploaded PDF, and it can be seen that different methods actually utilize different datasets. Therefore, given the graph scales in the real world, we choose three relatively large datasets from SNAP and RecBole.
2. **Additional comparisons on the official datasets are provided.** For validating our method's effectiveness, we further conduct experiments and compare the performance of our method with baselines on their official datasets. The results are shown in Table R9 in the uploaded PDF. From the table we can observe that, on the official datasets, **our proposed method still consistently outperforms UGE and FairAdj on both debias and utility**, which further verifies our method's superiority.
------
### [About the concerns on presentation and clarification (W6, Q1, Q2, Q3)]
1. **The details of ablation study "-Augment".** In our "-Augment" setting, we randomly select an equal number of cross-links, without including internal-links in these randomly chosen edges. We will enhance the corresponding descriptions in the final version.
2. We will consider elaborating on the results of Table 4 in the final version (one additional page is permitted for the accepted paper).
3. **Additional descriptions about Figure 2.**
We would like to help you have a comprehensive understanding of Figure 2 by describing the following three aspects:
- **Experimental settings:** In Figure 2, we initially apply the Louvain algorithm to cluster the item-item graph in the LastFM dataset. Subsequently, for conciseness, we randomly sample 10 clusters and count the occurrences of the 10 most common labels across the entire graph.
- **Observations:** These results are visualized as a heatmap in Figure 2. It can be seen that each community contains its own specific information pattern, i.e. specific label distributions.
- **Conclusions:** It's impossible for one single community to encompass all diverse information in a network. This implies the propensity of graphs to form information cocoons with insufficient cross-links as bridges, which further verifies the significance of cross-links.
---
Rebuttal Comment 1.1:
Title: Change my score to weak accept
Comment:
I appreciate the detailed response. Most of my concerns have been addressed. Thus, I change my score to weak accept.
---
Reply to Comment 1.1.1:
Title: Thank you for the comments
Comment: Thank you again for your valuable review and reconsideration of scores! | Summary: This work finds that current GNN methods have severe data bias because GNNs tend to connect new links inside local neighborhoods and ignore distant ones. To address this problem, the authors investigate the bias across different communities and propose a general framework. In this framework, the authors devise three key components: supervision augmentation, twins-GNN, and an embedding fusion module. To display the effectiveness of each component, the authors perform an ablation study, and with all of them, the debiased model shows improvement on three datasets compared with six GNN backbones.
Strengths: + The idea of rethinking the data bias, especially the fact that existing GNN models tend to connect local neighbors, is novel and has basic value.
+ The writing is clear and easy to follow.
+ The framework has a strong generality that can be applied to most GNNs (six examples used in this paper).
+ The framework is an end-to-end framework and is easy to accomplish.
Weaknesses: - The clusters should be pre-computed by some community detection algorithms (Louvain algorithm in this main content and METIS algorithm in the appendix); however, the impact of the quality of the community detection is unknown, and the results vary greatly under different community detection algorithms.
- The improvement in link prediction on DBLP and LastFM seems not apparent; in particular, the bias is still very high. (This phenomenon is noted by the authors in the limitations section.)
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: + This paper uses Jaccard-based augmentation for Epinions and DBLP, and random walk based augmentation for LastFM because of LastFM’s high density. Moreover, the authors detail the analysis of the two augmentations, but how can we decide which is better before the link prediction experiments?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Yes. The authors acknowledge that the data bias might not be the mere reason, and the supervision augmentation lacks theoretical analysis.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are sincerely grateful to the reviewer for the careful evaluation and insightful comments. We would like to address the concerns raised in the feedback in the following responses.
------
### [About the impact of community detection algorithms (W1)]
By comparing the experimental results based on the Louvain algorithm (Table 2) and the METIS algorithm (Table C2 in Appendix D.2), it can be observed that different community detection methods have limited influence on the overall performance of GNNs, primarily affecting the debiasing effectiveness. This is mainly due to the sample variations in cross-links and internal-links obtained from different community detection methods. **However, our method consistently demonstrates a certain degree of debiasing capability across different community detection algorithms.** In practical application scenarios, users can define cross-links and internal-links based on their specific requirements and subsequently employ our approach for training.
------
### [About the choice of supervision augmentation (Q1)]
In particular, the choice of supervision augmentation method should primarily be determined by the characteristics of the dataset, such as its density. Additionally, as we highlight when introducing random walk based augmentation, when the communities within the graph are quite extensive, relying solely on Jaccard-based augmentation may inadvertently focus mostly on nodes at the boundaries of communities. Consequently, in such cases, a more effective supervision augmentation strategy would be random walk based methods.
Overall, in this paper, **we have presented two simple yet effective supervision augmentation methods**, both centered around the concept of identifying highly potential cross-links within the graph that have not been observed yet, and we acknowledge the possibility of exploring alternative and potentially more efficient methods in our future work.
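As a sketch of the random walk alternative (our own toy illustration, not the paper's implementation): nodes that short walks from a source frequently reach, but that lie outside the source's community, can be treated as high-potential cross-link candidates even when they share no direct neighbors with the source.

```python
import random
from collections import Counter

# Hypothetical sketch of random-walk based candidate scoring: count how
# often short walks from `start` visit each other node.
def walk_visit_counts(neighbors, start, walk_len=4, n_walks=500, seed=0):
    rng = random.Random(seed)
    visits = Counter()
    for _ in range(n_walks):
        node = start
        for _ in range(walk_len):
            node = rng.choice(sorted(neighbors[node]))
            if node != start:
                visits[node] += 1
    return visits

# Toy graph: two triangle communities bridged by edge (2, 3).
neighbors = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
    3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4},
}
community = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}

visits = walk_visit_counts(neighbors, start=0)
# Keep only unobserved cross-community candidates, ranked by visit count.
cross = {v: c for v, c in visits.items()
         if community[v] != community[0] and v not in neighbors[0]}
print(max(cross, key=cross.get))  # node 3, the entry point into community B
```

Unlike the Jaccard score, this scorer can reach candidates several hops away, which is why it suits denser graphs with large communities.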
------
### [About the limited debias performance (W2)]
As we mentioned in the limitations, we observe that the bias between cross-links and internal-links is only mitigated to a limited degree. Accordingly, we believe that the data bias between internal-links and cross-links may not be the only reason for the poor performance on cross-links; the performance bias may stem from multiple causes such as imbalanced data distributions, GNNs' biased aggregation operations, and so on. We sincerely hope that our work can provide a new debiasing direction for other researchers, and we leave the unexplored perspectives as future work.
------
We sincerely appreciate the efforts and valuable feedback provided by the reviewer. We genuinely hope that our responses have addressed your concerns and contributed to a better understanding of our research. If you have any further questions or confusion, please do not hesitate to reach out to us. We would be more than willing to assist and provide further clarification.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thanks for the authors' effort in the response. The authors addressed most of my concerns. I think this work basically has value, but it still needs some further analysis and theoretical results to fulfill the contributions on some of the inherent weaknesses if given a higher score. So, I keep the current score. | Summary: The authors aim to explore the issue of bias in the link prediction task for GNNs. Specifically, they develop methods to mitigate the bias resulting from graph topology - on internal links versus cross-community links. Their work relies on debiasing node embeddings and a fusion component that retains aspects of both the original and the debiased node embeddings. The main goal is to ensure that the implicit creation of information silos does not degrade link prediction performance. The overall architecture borrow from retrieval model literature and is designed to be partially agnostic to model choice.
Strengths: 1. The problem is significant enough in that enough research has been devoted to it in earlier literature, and that topology-induced bias is an interesting direction to consider.
2. The authors experiment on a number of baselines to show the relative superiority of their method. They also include a number of ablation studies to show the effectiveness of their architecture.
3. The architecture is model agnostic and allows for plugging in more powerful GNN models, for example.
4. The loss function of the link prediction objective does not have to be modified (supported by a regularizer) and so the loss surface is not directly affected. Instead, supervised augmentation provides a kind of regularizing effect.
Weaknesses: 1. The main weakness of the paper is the small variety of datasets that the experiments have been run on. Ideally there should be multiple kinds of graphs, varying by size (nodes, edges), or even types of communities, their strength or internal cohesiveness and the degree to which they overlap. In comparison, the number of baselines is acceptable. The authors could add more graphs, for e.g. from the SNAP repository and experiment on more graph parameters like the above. Further, synthetic datasets generated by a particular model could help serve as a baseline and also possibly study the evolution of such cross-community bias in social networks.
2. While community detection is done mainly via the Louvain algorithm, one could consider clustering based on node features, as well as other methods such as the stochastic block model, as another baseline.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors address the fact that bias is not entirely eliminated by their procedure and that theoretical support does not yet exist for their contribution. The work has a positive societal impact as the key goal is to reduce bias.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Sincere thanks to the reviewer for the thorough evaluation and constructive comments. In response to the reviewer's insightful feedback, we have organized our rebuttal as follows:
------
### [About the datasets (W1)]
We would like to address the concerns regarding the datasets utilized in our paper from the following two aspects:
1. **The reasons for choosing social networks and recommendation networks.** As we analyzed in Appendix B.1, cross-links are essential due to their significance in easing information cocoons such as filter bubbles and echo chambers. These information cocoons widely exist in social networks and recommendation scenarios, which is why we utilized only these two types of datasets in our work.
2. **Additional experiments on two real-world datasets are provided.** In fact, our approach can potentially be applied to other types and sizes of datasets as well. In particular, we further investigate the performance of our method on the **Amazon dataset** (from SNAP) and the **Cora dataset**. **The corresponding experimental results based on two kinds of supervision augmentation methods are given in Table R5-R6 in the uploaded PDF.** So far, the datasets we have utilized are summarized in the table below:
| | Users | Items | Interactions | Type | Scenario |
| -------- | ------- | ------- | ------------ | --------- | ---------------------- |
| Epinions | 75,879 | - | 508,837 | User-User | Social Network |
| DBLP | 317,080 | - | 1,049,866 | User-User | Co-author Network |
| Cora | - | 2,078 | 5,278 | Item-Item | Citation Network |
| Amazon | - | 334,863 | 925,872 | Item-Item | Co-purchase Network |
| LastFM | 1,892 | 17,632 | 92,834 | User-Item | Recommendation Network |
It can be observed that we conduct experiments on five different kinds of datasets, with graph sizes ranging from 2.1k to 334.8k nodes. The experimental results on all these datasets consistently verify the effectiveness of our model.
3. **Additional experiments on a synthetic dataset are provided.** We additionally employ the *Stochastic Block Model* (SBM) to generate a synthetic dataset. In particular, the synthetic dataset comprises 4000 nodes and 56128 edges, and we perform experiments using two powerful GNN models: GraphSAGE and GAT. The corresponding results are displayed in Table R7 within the uploaded PDF. **Notably, our approach also consistently outperforms the baseline models in terms of both utility and debias on the synthetic dataset.**
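For reference, an SBM graph of the kind described above can be generated in a few lines of pure Python. The sketch below is illustrative only: the block sizes and edge probabilities are assumptions (the actual synthetic dataset in Table R7 has 4,000 nodes and 56,128 edges), and it is not the generator used in our experiments.

```python
import random

def stochastic_block_model(block_sizes, p_in, p_out, seed=0):
    """Generate an undirected SBM graph: same-block node pairs are linked
    with probability p_in, cross-block pairs with probability p_out."""
    rng = random.Random(seed)
    # Assign each node a block id according to the requested block sizes.
    blocks = [b for b, size in enumerate(block_sizes) for _ in range(size)]
    n = len(blocks)
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            p = p_in if blocks[u] == blocks[v] else p_out
            if rng.random() < p:
                edges.append((u, v))
    return blocks, edges

# Four equal communities (sizes and probabilities are illustrative).
blocks, edges = stochastic_block_model([500] * 4, p_in=0.02, p_out=0.001)
# Cross-links are a small minority of all edges, mirroring real graphs.
cross = sum(1 for u, v in edges if blocks[u] != blocks[v])
```

Setting `p_in` well above `p_out` reproduces the imbalance between internal-links and cross-links that our supervision augmentation targets.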
------
### [About the community detection algorithms (W2)]
We agree with the reviewer that the community detection methods are important for our work, and we would like to address the reviewer's concerns from the following two perspectives:
1. **The reasons for using the Louvain and Metis algorithms.** As we highlight in the introduction, our study primarily concentrates on link prediction bias based on graph topology. Consequently, we do not use community detection methods based on node features in our experiments; instead, we use two kinds of community detection methods based on graph topology, namely Louvain and Metis (mentioned in Appendix D.2).
2. **Additional investigations on other community detection algorithms are provided.** To further illustrate the robustness of our approach to community detection methods, in addition to the Metis algorithm mentioned in Appendix D.2, we implement the LPA algorithm for community detection. **The experimental results on LPA are presented in Table R8 in the uploaded PDF**. It can be seen that, although we deploy a different community detection method, our approach still demonstrates a strong capability in easing the bias between cross-links and internal-links, with even improved link prediction utility.
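For concreteness, the label propagation algorithm (LPA) mentioned above can be sketched in pure Python. The tiny example graph and random tie-breaking below are illustrative, not the implementation behind Table R8; note also that LPA's outcome can depend on the update order and may merge weakly separated communities.

```python
import random
from collections import Counter, defaultdict

def label_propagation(edges, n_nodes, max_iters=50, seed=0):
    """Asynchronous LPA: every node repeatedly adopts the most frequent
    label among its neighbors, with random tie-breaking."""
    rng = random.Random(seed)
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    labels = list(range(n_nodes))  # each node starts in its own community
    order = list(range(n_nodes))
    for _ in range(max_iters):
        rng.shuffle(order)
        changed = False
        for u in order:
            if not adj[u]:
                continue
            counts = Counter(labels[v] for v in adj[u])
            top = max(counts.values())
            new = rng.choice([l for l, c in counts.items() if c == top])
            if new != labels[u]:
                labels[u], changed = new, True
        if not changed:  # stop once no label changes in a full pass
            break
    return labels

# Two triangles joined by a single cross-link (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
labels = label_propagation(edges, 6)
```

Since labels only propagate along edges, the resulting partition is purely topology-based, which is what makes LPA a suitable extra baseline here.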
------
We sincerely appreciate the efforts and valuable feedback provided by the reviewer. We genuinely hope that our responses have addressed your concerns and contributed to a better understanding of our research. If you have any further questions or confusion, please do not hesitate to reach out to us. We would be more than willing to assist and provide further clarification.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: Thanks to the authors for the response. I would like to point out that graph structure can be used in SBM-style algorithms, e.g., https://www.cs.utexas.edu/users/inderjit/public_papers/kdd_cocluster.pdf. One can run this co-clustering algorithm on the adjacency matrix. Also, one can use node embeddings as node features for co-clustering. I was interested to see how sensitive the method is to different choices of community detection algorithms.
Summarizing, I think that the paper could include more analysis. I will keep my score as it is. | Summary: The paper introduces a twin-structure framework for mitigating bias in link prediction methods based on Graph Neural Networks. Current link prediction approaches often prioritize performance without considering biases on sensitive attributes of nodes, leading to social risks and information cocoons. The proposed framework divides the graph into communities, distinguishing internal-links from cross-links, and employs supervision augmentation to increase signals for cross-links, generating debiased node embeddings. An embedding fusion module preserves the performance of internal-links while alleviating bias between them and cross-links. Experimental results on real-world datasets demonstrate the framework's effectiveness in reducing bias and improving overall link prediction performance compared to state-of-the-art baselines.
Strengths: * The framework the authors propose achieves good improvements on many GNNs.
* The logic of this paper is easy to understand.
* The paper conducts experiments on large datasets.
Weaknesses: * The paper doesn't mention subgraph-based GNNs for link prediction.
* It is not clear how the algorithm performs inference.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: * The paper doesn't mention subgraph-based GNNs for link prediction. One reason may be that those GNNs are not applicable to large graphs and not fast enough for recommendation systems. But I'm just curious: do cross-links matter for those kinds of GNNs for link prediction?
* In equations 4 and 7, there seems to be only one negative edge for each positive edge. Why not use some K as a hyperparameter? With more than one negative edge per positive one, what would the equations become?
* How to use this framework in the inference step? Do you use augmentation, or just use $Z^O$?
* In Table 3, model without fusion can get best cross-link performance, does that mean cross-links don't matter too much?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive and meticulous comments! We have carefully examined the mentioned issues and have prepared the following rebuttal to address these concerns.
------
### [About subgraph-based GNNs (W1, Q1)]
1. **The reasons for not using subgraph-based GNNs.** Subgraph-based GNNs are usually time-consuming due to excessive subgraph extraction operations, and this issue can be severe on large graphs. In light of this, we mainly choose several efficient and effective GNNs as base models. However, we will make sure that more subgraph-based GNNs are discussed in the final version.
2. **Experiments with a classical subgraph-based GNN are provided.** To validate whether cross-links also matter for subgraph-based GNNs, we opt for a highly classic subgraph-based link prediction model, SEAL [1], for experiments. Specifically, we conduct experiments on the Epinions and DBLP datasets and employ the Louvain algorithm for community detection. The experimental results are presented in Table R3 in the uploaded PDF. It can be seen that SEAL also faces a significant performance bias between cross-links and internal-links. This observation further verifies the pervasive existence of cross-link bias issues.
[1] Zhang et al. Link Prediction Based on Graph Neural Networks. NeurIPS 2018: 5171-5181.
------
### [About the details of inference (W2, Q3)]
Thanks for pointing out this issue. During the inference stage, we use the node embedding $Z$, the output of the embedding fusion module, for evaluation. For better understanding and to eliminate potential confusion, **we provide the inference details in Algorithm 1 in the uploaded PDF**.
------
### [About the loss functions (Q2)]
In this work, we have followed the literature [2] and [3] and **employed one of the most classic loss functions in recommendation systems - BPRLoss** as our loss function. Generally, in BPRLoss, there is only one negative sample for each positive sample. However, we concur with the reviewer that incorporating multiple negative samples may positively contribute to improving link prediction performance. Since the loss function is not the primary focus of our research, we could design other alternative loss functions to handle the situation of multiple negative samples, and we provide two kinds of formulations below:
- $\mathcal{L}=-\sum_{\langle u, v\rangle \in \mathcal{E}^O}\enspace\sum_{\langle u, \hat{v}\rangle \notin \mathcal{E}^O}\enspace\log(\sigma(r_{u,v}-r_{u, \hat{v}}))$ , where $\sigma$ is an activation function like sigmoid.
- $\mathcal{L} = - \sum_{\langle u,v\rangle \in \mathcal{E}^O} \enspace \log \frac{\exp(r_{u,v} / \tau)}{\sum_{\langle u, \hat{v}\rangle \notin \mathcal{E}^O} \enspace \exp(r_{u, \hat{v}} / \tau)}$, where $\tau$ is the temperature parameter.
Both loss functions aim at maximizing the prediction scores between positive node pairs while minimizing those between negative node pairs. **Note that all these loss functions can be easily deployed in our method; the choice is determined by practitioners according to their practical applications**. We will make this claim clearer in the final version.
[2] He et al. LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation. SIGIR 2020: 639-648.
[3] Wu et al. Self-supervised Graph Learning for Recommendation. SIGIR 2021: 726-735.
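For concreteness, both multi-negative formulations above can be sketched in a few lines of pure Python (the scores are illustrative scalars standing in for $r_{u,v}$; note that the softmax variant below also includes the positive score in the denominator, as is standard for InfoNCE-style losses):

```python
import math

def bpr_multi_neg(pos_score, neg_scores):
    """First formulation: sum of pairwise log-sigmoid margins over K negatives."""
    return -sum(math.log(1.0 / (1.0 + math.exp(-(pos_score - s))))
                for s in neg_scores)

def softmax_loss(pos_score, neg_scores, tau=0.2):
    """Second formulation: softmax over the positive and K negatives,
    scaled by a temperature tau (computed with a stable log-sum-exp)."""
    logits = [pos_score / tau] + [s / tau for s in neg_scores]
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - pos_score / tau

# Ranking the positive pair above the sampled negatives lowers both losses.
negs = [0.5, -0.3, 0.1]
well_ranked = bpr_multi_neg(2.0, negs)
poorly_ranked = bpr_multi_neg(-1.0, negs)
```

With `len(neg_scores) == 1` and `tau = 1`, the first function reduces to the standard BPRLoss used in the paper.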
------
### [About the concerns on ablation study (Q4)]
It seems that there are some misunderstandings about the "-fusion" setting in the ablation study and about cross-links, and we would like to clarify the following two key points:
1. **The role of embedding fusion.** The embedding fusion module is designed to help GNNs mitigate the impact of noise introduced by data augmentation. While standalone supervision augmentation can effectively aid model debiasing, it may affect the overall performance of the model. In the ablation study, we remove the embedding fusion module and **retain the supervision augmentation, which is denoted as "-fusion"**. As a result, in Table 3, **"-fusion" achieves the best debias performance -- the lowest Bias -- while the overall performance is affected.**
2. **The significance of cross-links.** As demonstrated in Figure 1, the proportion of cross-links in our utilized dataset is relatively small; however, they actually play a significant role in eliminating information cocoons and ensuring graph connectivity as we analyzed in Appendix B.
------
We sincerely appreciate the reviewer's dedication and insightful comments. We hope that our responses have effectively tackled your concerns and enhanced your comprehension of our study. If you have additional questions or confusion, please feel free to contact us without hesitation. We are more than willing to be engaged in a new discussion and offer assistance.
---
Rebuttal Comment 1.1:
Title: Change my score to weak accept
Comment: I appreciate the detailed response. Most of my concerns have been addressed. For the discussion of subgraph-based models, I encourage the authors to include some more recent models which are more efficient (like SUREL+) in the final version. In the end, I change my score to weak accept.
---
Reply to Comment 1.1.1:
Title: Thanks for your comment
Comment: Thanks for your valuable feedback and reconsideration of scores! We will take your advice and include more recent subgraph-based work in the final version. | Rebuttal 1:
Rebuttal: ## Response for All Reviewers
We sincerely appreciate the dedication and effort put forth by all the reviewers. We hope that our responses have effectively addressed your concerns and contributed to a deeper understanding of our research. **Due to space limitations, we have included most of the experimental results (figures and tables) in the attached PDF, with corresponding references provided in the rebuttal.** If you have additional questions or confusion, please feel free to contact us without hesitation. We would sincerely like to provide more information and clarification if necessary.
Pdf: /pdf/7e99f0f53bd6b2330969c4fdfbe8daba7cc3635e.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper addresses the issue of bias in GNN link prediction and proposes a twin-structure framework to mitigate the bias and improve performance. The framework includes an embedding fusion module and a debias module, which work together to reduce the bias between cross-links and internal-links without hurting overall performance. Experiments on three datasets with six different GNNs show that the proposed framework can both alleviate the bias and boost the overall GNN performance.
Strengths: - The paper addresses an important issue of bias in GNN-based link prediction and proposes a novel framework to mitigate the bias.
- The experiment results show that the proposed framework almost always provides both debias and performance gain, even on different GNNs.
- The twin-structure and embedding fusion is simple and clear.
- Limitations are discussed and code is provided for reproducibility
Weaknesses: - Paper presentation can be improved. For example, Figure 5 is too small, numbers are hard to read and colors are hard to distinguish. Also, if space permits, I feel like moving Algorithm 1 in Appendix to the main body would be better.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Any efficiency analysis/convergence analysis/time complexity analysis for better understanding the complexity of the proposed framework compared to standard supervised training.
2. For the mentioned limitation of not having a theoretical analysis. Any ideas about how to approach it? Are there any potential connections to any existing GNN analysis framework?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Two limitations are mentioned. 1. The bias is not completely eliminated. 2. Lacking theoretical understanding. I think pointing out these limitations is a plus and both limitations can trigger meaningful future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for these insightful and enlightening comments. Specifically, we aim to address the reviewer's concerns with the following responses.
------
### [About the presentation (W1)]
According to the official guidelines of NeurIPS 2023, accepted papers are allowed one extra page for the camera-ready version. Accordingly, we will increase the figure and font sizes in Figure 5 and relocate Algorithm 1 from the Appendix to the main text if our work is accepted.
------
### [About the efficiency of our model (Q1)]
Thanks for your comment, and we would like to analyze the efficiency of our model through the following two perspectives:
1. **Time complexity analysis is provided.** Before the time complexity analysis, we would like to give some notations first:
| Notations | Descriptions |
| ------------------ | ------------------------------------------------------- |
| $\mathcal{E}_{in}$ | The set of internal-links in the graph. |
| $\mathcal{E}_{cr}$ | The set of cross-links in the graph. |
| $k$ | The augmentation ratio. |
| $S_t$ | The training step for training embedding fusion module. |
Since our framework is model-agnostic, we assume a specific GNN model requires time $T$ to finish training on a single sample. Then, the time complexity of the base models can be denoted as:
$T_{base}=(|\mathcal{E}_{in}|+|\mathcal{E}_{cr}|)\cdot T$
while the time complexity of our method is:
$\begin{aligned}T_{our} &=(S_t+2)\cdot(|\mathcal{E}_{in}|+|\mathcal{E}_{cr}|)\cdot T+k\,(|\mathcal{E}_{in}|-|\mathcal{E}_{cr}|)\cdot T \\ &<(S_t+2)\cdot(|\mathcal{E}_{in}|+|\mathcal{E}_{cr}|)\cdot T + k\,(|\mathcal{E}_{in}|+|\mathcal{E}_{cr}|)\cdot T \\ &=(S_t+2+k)\cdot T_{base}\end{aligned}$
where $k$ is the augmentation ratio, which is fixed to 1 or 1.25 in advance. It can be seen that, for each epoch, the upper bound of our algorithm's time complexity depends mainly on $S_t$. **Fortunately, in our experiments, we deploy a dynamic training strategy, which sets $S_t$ to a relatively small value to avoid unnecessary time expenses at the beginning of training.** To be concrete, for each dataset, the value of $S_t$ starts at 1 in the initial epochs and slowly increases to 20 as training progresses.
2. **Convergence analysis is provided.** We further provide a convergence analysis of our models and two base models on the Epinions dataset. The results on GAT and LightGCN are illustrated in Figure R1 in the uploaded PDF. From these results we find that, **despite the additional time our model requires to achieve convergence, it attains performance comparable to the base models within a similar timeframe.** We believe the main reason for this observation is our dynamic training strategy, which sets a relatively small $S_t$ at the beginning of training, thus greatly reducing the time expenses.
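As a quick sanity check on the time complexity analysis above, the exact per-epoch cost ratio can be computed directly; the per-sample time $T$ cancels out, and the edge counts below are illustrative rather than taken from any of our datasets:

```python
def epoch_cost_ratio(e_in, e_cr, s_t, k):
    """Exact per-epoch ratio T_our / T_base; the per-sample time T cancels."""
    t_base = e_in + e_cr
    t_our = (s_t + 2) * (e_in + e_cr) + k * (e_in - e_cr)
    return t_our / t_base

# Illustrative edge counts; S_t grows from 1 to 20 under the dynamic strategy.
early = epoch_cost_ratio(e_in=900_000, e_cr=100_000, s_t=1, k=1.25)
late = epoch_cost_ratio(e_in=900_000, e_cr=100_000, s_t=20, k=1.25)
```

The exact ratio always stays below the loose upper bound $(S_t + 2 + k)$, and the early-epoch overhead is small, matching the motivation for the dynamic training strategy.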
------
### [About the theoretical analysis (Q2)]
We believe this is a very inspiring question and we list some existing techniques that may have potential connections with our work.
1. **Counterfactual learning.** There might be potential connections between our work and counterfactual learning [1]. In our setting, supervision augmentation introduces plenty of unobserved/counterfactual samples for GNN learning. To be specific, in this paper, we could regard node pairs as contexts $\mathcal{C}$, structural information like community memberships as the treatment $\mathcal{T}$, and the presence of links as the outcome $\mathcal{O}$.
The conventional GNN-based link predictor typically follows the formulation $\mathcal{O}=f(\mathcal{C}, \mathcal{T})$. By incorporating counterfactual samples, the objective transforms into finding a GNN model that satisfies $\mathcal{O}=f(\mathcal{C}, \hat{\mathcal{T}})$, where the distribution of treatment has changed. In this manner, optimization guided by these two objectives inherently aids the GNN in more effectively capturing the underlying relationships between node pairs and enhancing its robustness to noise or unessential treatment, thus yielding more accurate predictions.
2. **Multi-modal fusion.** Multi-modal fusion refers to the process of combining information from multiple distinct sources or modalities to enhance the representation of machine learning. In our work, the twin GNNs actually play two distinct roles in mining graph data, and the embedding fusion may increase the whole model's robustness to noise and variations.
[1] Zhao et al. Learning from Counterfactual Links for Link Prediction. ICML 2022: 26911-26926
------
We sincerely appreciate the efforts and valuable feedback provided by the reviewer. We genuinely hope that our responses have addressed your concerns and contributed to a better understanding of our research. If you have any further questions or confusion, please do not hesitate to reach out to us. We would be more than willing to assist and provide further clarification.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank the authors for answering my questions. My concerns are addressed. The connection to counterfactual learning reads interesting to me. The authors may consider including this part in their final draft if space permits. I will keep my score. | null | null | null | null | null | null |
VOCE: Variational Optimization with Conservative Estimation for Offline Safe Reinforcement Learning | Accept (poster) | Summary: This paper utilizes probabilistic inference to address the problem of offline safe RL by introducing non-parametric variational distributions.
A pessimistic estimation of Q-values is used to avoid extrapolation errors caused by OOD actions. Extensive comparative numerical experiments are carried out with respect to both reward and cost curves.
Strengths: 1. The paper has a solid chain of theoretical derivation.
2. The numerical results of VOCE seem promising.
Weaknesses: 1. The paper is not well organized or well written. For example, several notations are not defined before use, or at least are not easy to find, e.g., $Q^{r}(s,a)$ used in equation 7 and $\pi_{M}(a_t|s_t)$ in equation 13. It would also be clearer to give a complete summary of the main algorithm VOCE, rather than deriving it literally step by step.
2. Some claims in the paper need further theoretical explanation or numerical experiments to clarify.
3. It is not well studied how key tuning parameters affect the numerical conclusions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
1. The algorithm involves lots of parameters to tune, like $\epsilon$ in equation 11 and $\chi$ in equation 17. I am a little bit suspicious about how sensitive the numerical results are to those parameters. Maybe some ablation study is needed to further clarify.
2. Why can pessimistic estimation of Q-values help avoid extrapolation errors caused by OOD actions? I don't get the intuition here. More theoretical analysis/explanation or numerical experiments are needed.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No negative societal impact of their work is seen.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear review wQ6W:
Thank you for the insightful comments which help us improve the paper. We'll answer your questions one by one below. We are also very honored to share our understanding with you.
+ __Q1: "The paper is not well-organized written. $\cdots$ rather than deriving literally step by step." from the first weakness.__
__A1:__ Sorry for our unclear presentation. $Q^{r}(s_t,a_t)$ represents the reward Q-value of taking action $a_t$ in state $s_t$, and $\pi_{\mathcal{M}}(a_t|s_t)$ refers to the policy corresponding to unseen state-action pairs. We have double-checked all variables in the manuscript and provided definitions for variables such as $\rho(s_0)$ and $\epsilon$: $\rho(s_0)$ denotes the distribution of the initial state $s_0$, and $\epsilon$ represents the KL divergence threshold between the variational distribution $q(a_t|s_t)$ and the policy $\pi(a_t|s_t)$. We will provide definitions for any remaining undefined variables in the final version. We have presented the pseudo-code for the VOCE algorithm in Appendix A.1 of the manuscript; to enhance the clarity of the paper's structure, we will add a summary and implementation details of the algorithm around its pseudo-code in the final version's appendix.
+ __Q2:"Some claims in the paper need further theoretical explanation or numerical experiments to clarify."__
__A2:__ Thank you for your suggestion. If you have any further questions or concerns about our work, please do not hesitate to let us know; we would be more than happy to provide clarifications. Additionally, we have conducted ablation experiments on the manually tuned hyperparameter $\epsilon$, with results depicted in Fig. 1. They indicate that setting $\epsilon$ too small reduces the convergence speed of the policy and may even diminish the algorithm's overall performance, whereas setting $\epsilon$ too large causes policy instability. The results also demonstrate that $\epsilon \in [0.1,1]$ ensures a favorable balance between convergence and stability for the policy; this interval represents a feasible range rather than an optimal one.
+ __Q3: "It's not well studied how key tuning parameters affect the numerical conclusion."__
__A3:__ We thank you for pointing out this issue. Regarding the manually tuned parameter $\epsilon$, we have conducted ablation experiments, with results presented in Fig. 1. Additionally, regarding the hyperparameters $\kappa$ and $\chi$ mentioned in the manuscript, setting them too small makes it difficult to guarantee the algorithm's performance theoretically, while setting them too large leads to conservative behavior and a decrease in performance. Therefore, we adopt a dual-tuning approach with gradient-based adaptation instead of a manual setting. To facilitate the observation of the trade-off factors $\kappa$ and $\chi$, we have documented their evolution curves during training, depicted in Fig. 2. The curves indicate that as the model gradually converges, the trade-off factors $\kappa$ and $\chi$ diminish progressively and eventually stabilize at values greater than zero.
+ __Q4:"The algorithm involves lots of parameters to tune, $\cdots$ Maybe some ablation study is needed to further clarify." from the first question.__
__A4:__ We thank you for pointing out this issue. Regarding the manually set hyperparameter $\epsilon$, we have conducted ablation experiments as shown in Fig. 1; the rules governing its configuration are analyzed in A2 above. Additionally, $\chi$ is adjusted using a dual-tuning approach with a gradient descent strategy rather than being set manually. To facilitate observation of the trade-off factors $\kappa$ and $\chi$, we have documented their evolution curves during training, and a detailed analysis of their variation pattern is given in A3 above.
+ __Q5: "Why pessimistic estimation of Q-values can help avoid extrapolation errors caused by OOD actions. I don't get the intuition here. More theoretical analysis/ explanation or numerical experiments are needed. "__
__A5:__ This is an interesting and insightful question. The pessimistic conservative estimation of Q-values is rooted in the assumption that OOD actions absent from the sample data have lower rewards and higher costs. Building on this intuition, the Q-value evaluation minimizes the reward Q-values and maximizes the cost Q-values associated with OOD actions. This induces lower estimated reward Q-values and higher estimated cost Q-values for unseen actions. As a result, the policy reduces the probability of producing out-of-distribution (OOD) actions, thereby mitigating the extrapolation error they induce. Furthermore, this viewpoint is substantiated by existing theoretical work [1]. On the other hand, our manuscript includes ablation experiments on the conservative estimates for $Q^{r}$ and $Q^{c_i}$. The experimental findings indicate a notable enhancement in reward returns due to the conservative estimate for $Q^r$, and a reduction in the probability of policy constraint violation attributable to the conservative estimate for $Q^{c_i}$.
[1] Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative q-learning for offline reinforcement learning. Advances in Neural Information Processing Systems, 33:1179–1191, 2020.
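To make the intuition concrete, the toy tabular sketch below illustrates the effect of such a penalty. Everything in it is illustrative: the Q-values, action names, and additive penalty coefficient are invented, and the adjustment is a simplified CQL-style stand-in rather than VOCE's actual pessimistic bounds.

```python
def conservative_adjust(q_reward, q_cost, in_data, alpha=1.0):
    """Push down reward-Q and push up cost-Q for actions absent from the
    dataset (a simplified CQL-style penalty, not the exact VOCE update)."""
    adj_r = {a: q - (0 if in_data[a] else alpha) for a, q in q_reward.items()}
    adj_c = {a: q + (0 if in_data[a] else alpha) for a, q in q_cost.items()}
    return adj_r, adj_c

# Naive estimates over-value the OOD action "c" due to extrapolation error.
q_reward = {"a": 1.0, "b": 0.8, "c": 1.4}   # "c" never appears in the data
q_cost   = {"a": 0.2, "b": 0.1, "c": 0.1}
in_data  = {"a": True, "b": True, "c": False}

adj_r, adj_c = conservative_adjust(q_reward, q_cost, in_data)
greedy_before = max(q_reward, key=q_reward.get)  # the OOD action "c"
greedy_after = max(adj_r, key=adj_r.get)         # in-distribution action "a"
```

After the adjustment the greedy choice falls back to an action supported by the data, which is exactly the mechanism that curbs extrapolation error from OOD actions.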
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed reply. I decide to increase my score to acceptance.
---
Reply to Comment 1.1.1:
Title: Response
Comment: Dear review wQ6W:
We greatly appreciate your patient reply and extend our gratitude again for the valuable suggestions you have provided to enhance our work. | Summary: This paper introduces an interesting algorithm for offline safe RL, called Variational Optimization with Conservative Estimation (VOCE). The primary challenge tackled by the authors is the influence of safety constraints and out-of-distribution (OOD) actions, which often hamper the optimization of high-reward policies while maintaining safety. Traditional methods, such as linear programming and exploration-evaluation methods, have shown their limitations in coping with these challenges. In response, the authors propose a probabilistic inference-based framework that leverages Expectation and Maximization (EM) to provide more flexible policy optimization. It employs pessimistic estimation methods to calculate the Q-value of cost and reward, thus mitigating extrapolation errors due to OOD actions.
Strengths: 1. This work is an interesting extension of off-policy safe RL (particularly CVPO) to the offline setting. The application of probabilistic inference to the problem of offline safe RL introduces non-parametric variational distributions to replace parameterized policies, improving the flexibility of optimizing safe policies.
2. The paper's methodology is clear and well presented. The authors derive upper and lower bounds for Q-value estimation using a pessimistic estimation approach, which is then utilized to estimate the Q-values of costs and rewards and mitigate extrapolation errors from OOD actions.
3. Experiments demonstrate the superiority of the VOCE algorithm over baseline methods in terms of safety.
Weaknesses: 1. It seems that many claims and arguments are directly derived from CVPO, such as Propositions 3.4 and 3.5. The authors should properly cite the references accordingly. In addition, some other recent offline safe RL [methods](https://arxiv.org/abs/2302.07351) and [benchmarks](https://arxiv.org/abs/2306.09303) could be discussed in the paper.
2. I am not sure whether the EM optimization objectives in CVPO could be directly applied to the offline setting, since the introduction of the KL constraints in the E-step and M-step aims to regularize the policy update such that the behavior policy will not deviate too much and cause catastrophic failure in the online setting; however, in the offline setting, since the policy will not interact with the environment, this constraint seems not very necessary? Would regularizing the KL constraints between the variational distribution and the behavior policy in the datasets (with some density estimations for the behavior policies) be a better choice?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I think one major challenge for these constrained optimization-based approaches for offline safe learning is to accurately estimate the Q-value functions, since a subtle inaccuracy will be amplified by comparing Qc estimates and the target threshold after more and more optimization iterations. So I am curious how the authors think and tackle this problem, apart from using pessimism. In fact, pessimism alone can not solve the Q value mismatch between the current optimized policy and the behavior policy in the datasets.
2. Following up on the above question, I wonder how VOCE would perform in more diverse datasets with a wider span of trajectories over the cost-reward-return spaces. For example, collecting datasets with varying thresholds, hyper-parameters, and algorithms, as shown in [this](https://arxiv.org/abs/2306.09303) work.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I did not find related discussions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear review kEdv:
Thank you for all your valuable suggestions and meticulous comments. We will incorporate your suggestions in the revision. Below we respond to your key concerns point by point. Please let us know if there are any further questions.
+ __Q1: "It seems that many claims and arguments are directly derived from CVPO, such as Propositions 3.4 and 3.5. The authors should properly cite the references accordingly."__
__A1:__ Thank you for your suggestions. The CVPO[1] work has provided profound inspiration for ours. We have cited and discussed it in Section 3.2 of our manuscript. Furthermore, in the final version we will supplement the discussion with the remark that "similar theoretical foundations can be found in the online safe RL work CVPO[1], as outlined in Propositions 3.4 and 3.5."
+ __Q2: "In addition, some other recent offline safe RL methods and benchmarks could be discussed in the paper."__
__A2:__ Following your suggestion, we will incorporate a discussion of the CDT[2] algorithm in the related work section of the final version. Additionally, we will provide the necessary comparative experiments based on the DSRL[3] dataset in the Appendix. Due to time constraints, we have run experiments with the VOCE algorithm in three scenarios of the DSRL dataset; the experimental results are presented in Table 1 (the table is available in the supplementary PDF). Furthermore, we consider CDT a highly meaningful work that leverages the return-conditioned sequence modeling framework to facilitate zero-shot adaptation to various constraint thresholds during deployment while ensuring both safety and high rewards.
+ __Q3: "I am not sure whether $\cdots$ this constraint seems not very necessary? " from the second weakness.__
__A3:__ Insightful question. Introducing the KL divergence constraint during the optimization of the optimal variational distribution $q(a_t|s_t)$ can mitigate policy oscillations caused by the iteration of Lagrange multipliers in the optimization process. Furthermore, as we have already introduced the KL divergence constraint between the optimal variational distribution $q(a_t|s_t)$ and the previous iteration's policy $\pi_{\theta}(a_t|s_t)$ during the process of solving for the optimal variational distribution $q(a_t|s_t)$, imposing an additional constraint on the distance between the latest policy and the previous policy could lead to slow policy convergence and even performance degradation. Hence, we have not incorporated the KL divergence constraint in the process of updating the parameterized policy.
+ __Q4:"Would regularizing the KL constraints between the variational distribution and the behavior policy in the datasets (with some density estimations for the behavior policies) be a better choice?"__
__A4:__ Interesting and insightful question. You have provided a meaningful approach that utilizes KL divergence to constrain the distance between the variational distribution and the behavioral policy. This method aims to reduce the discrepancy between the learned policy and the behavioral policy based on sample data, potentially improving the algorithm's performance on expert and safe datasets. However, for non-expert and non-safe datasets, this approach might hinder the variational distribution from deviating away from unsafe behaviors, making it challenging for the final policy to satisfy safety constraints unless we specifically train our policy using expert safety data.
+ __Q5: "I think one major challenge for $\cdots$ optimized policy and the behavior policy in the datasets." from the first question.__
__A5:__ We fully agree with your perspective. One of the main challenges in offline safe reinforcement learning based on off-policy methods is ensuring the accuracy of Q-value estimation. Indeed, relying on a strict upper-bound estimate for $Q^{c_i}$ can lead to error accumulation. To mitigate the Q-value mismatch between the current policy and the behavior policy after multiple iterations, we place full trust in the action-state pairs present in the sample data. Therefore, during the update of the cost Q-value $Q^{c_i}$, we maximize the $Q^{c_i}$ of action-state pairs present in the sample data and incorporate this trick into the VOCE code.
+ __Q6: "Following up on the above question, I wonder how VOCE would perform in more diverse datasets with a wider span of trajectories over the cost-reward-return spaces. For example, collecting datasets with varying thresholds, hyper-parameters, and algorithms, as shown in this work."__
__A6:__ Thank you for your suggestion. The benchmark you mentioned provides an excellent testing methodology, evaluating an algorithm's performance under various cost thresholds. This method effectively captures an algorithm's performance across different cost thresholds and hyperparameters. Therefore, we are considering including our algorithm's performance on this benchmark in the final version. Currently, we have included the results from several tasks we have conducted, as shown in Table 1 (the table is available in the supplementary PDF). Additionally, another recent work of ours also considers evaluating our algorithm's performance on this benchmark.
[1] Zuxin Liu, Zhepeng Cen, Vladislav Isenbaev, Wei Liu, Steven Wu, Bo Li, and Ding Zhao. Constrained variational policy optimization for safe reinforcement learning. In International Conference on Machine Learning, pages 13644–13668. PMLR, 2022.
[2] Zuxin Liu, Zijian Guo, Yihang Yao, Zhepeng Cen, Wenhao Yu, Tingnan Zhang, and Ding Zhao. Constrained decision transformer for offline safe reinforcement learning. arXiv preprint arXiv:2302.07351, 2023a.
[3] Zuxin Liu, Zijian Guo, Haohong Lin, Yihang Yao, Jiacheng Zhu, Zhepeng Cen, Hanjiang Hu, Wenhao Yu, Tingnan Zhang, Jie Tan, et al. Datasets and benchmarks for offline safe reinforcement learning. arXiv preprint arXiv:2306.09303, 2023b.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks for the detailed reply and clarification. Most of my concerns are well-addressed, and I am hoping to see the revision with better paper presentations. I decide to increase my score to acceptance.
---
Reply to Comment 1.1.1:
Title: Response
Comment: Dear reviewer kEdv:
We greatly appreciate your patient reply. We are also grateful for your recognition of our efforts and the positive evaluation of our work. Once again, we extend our gratitude for your valuable feedback. Your suggestions are of great value in enhancing the quality of our work. | Summary: This paper applies CQL to offline safe RL tasks. It introduces a non-parametric policy to search for the actions that satisfy the constraints. Lagrangian multipliers are introduced for constraining the KL divergence w.r.t. the current policy and the additional constraints. CQL-style additional terms are introduced to avoid extrapolation issues during the offline training. The method is evaluated on four safe RL environments and demonstrates that the proposed approach can achieve better rewards while better approaching the cost limits.
Strengths: - The paper proposes a solid approach that contains the necessary ingredients for offline safe RL. The conservative estimation follows CQL and should be able to mitigate the issues in offline RL.
- An optimal variational distribution is introduced to improve the policy in a non-parameterized way. This step is novel in offline safe RL.
- The paper proposed an offline safe RL benchmark and made a careful study.
Weaknesses: - The novelty of the method could be more extensive. The method mostly combines existing methods. The variational method, non-parameterized policy estimation, constraint formulation, and conservative Q estimation are all standard methods.
- Some mathematical notations are not clear. For example, Eq. (19) and Eq. (20) are quite misleading. I think it should be $\log \pi_\theta(a_t|s_t)$ in Eq. (20).
- If my understanding is correct, the proposed method is not able to reach the cost limit in several point-button and car-button environments, which suggests the inadequacy of the proposed method.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I would like to know the reason that the authors chose to follow CQL instead of other offline RL algorithms. Since there is already a KL term in the optimization objective, I think Behavior cloning or advantage-weighted approach, which directly constrains the KL divergence between the learned policy and the behavior policy, is more straightforward here and may achieve higher performance in practice. Is there any reason to consider the conservative Q estimation rather than other choices?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: not discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer GTEu:
Thank you for your suggestions and constructive comments. We will incorporate your suggestions in the revision. Below we respond to your key concerns point by point. Please let me know if there are any further questions.
+ __Q1: "The novelty of the method could be more extensive. The method mostly combines existing methods. The variational method, non-parameterized policy estimation, constraint formulation, and conservative Q estimation are all standard methods."__
__A1:__ We agree that our VOCE method has been inspired by these advanced RL techniques. Although our method builds on existing techniques, we extend them to the setting of offline safe reinforcement learning, thereby significantly broadening their applicability. Furthermore, the proposed method achieves superior performance in offline safe reinforcement learning tasks compared to current state-of-the-art approaches. We now delineate the primary contributions of our VOCE method and highlight its key distinctions from existing approaches in the following aspects.
(1) We propose a novel variational optimization with conservative estimation for offline safe reinforcement learning, based on variational inference and pessimistic estimation methods, to solve the problem of optimizing safe policies from offline datasets. To the best of our knowledge, this is the first time that variational inference has been introduced to solve the task of offline safe reinforcement learning.
(2) Subsequently, we extend pessimistic estimation to the offline safe reinforcement learning setting, deriving for the first time an upper bound for the Q-value. We employ this upper bound to estimate the cost Q-value $Q^{c_i}$, thereby avoiding its underestimation.
(3) In the field of offline safe reinforcement learning, we introduce for the first time the validation of algorithm performance using sample data under various safety ratios $\varphi$. Furthermore, extensive experiments demonstrate that our algorithm offers highly competitive performance, particularly in terms of safety.
+ __Q2: "Some mathematical notations are not clear. For example, Eq. (19) and Eq. (20) are quite misleading. I think it should be $\log{\pi}_{\theta}(a_t|s_t)$ in Eq. (20)."__
__A2:__ We are sorry for the unclear mathematical representation, and we greatly appreciate your suggestions. We have revised Eq.(19) and (20) according to your suggestions, as shown below:
$\mathcal{L}(\theta)=\max \mathbb{E}\_{\tau\sim q} \left[-\alpha D_{KL}(q(\cdot|s_t)||\pi_{\theta}(\cdot|s_t))\right]= \max \alpha \mathbb{E}\_{\rho(s_0)} \mathbb{E}\_{q(a_t | s_t)} [\log {\pi_{\theta}(a_t|s_t)}-\log q(a_t|s_t)]$
$\mathcal{L}(\theta)=\max \mathbb{E}\_{\rho(s_0)}\mathbb{E}\_{q(a_t | s_t)} [\log {\pi_{\theta}(a_t |s_t)}]$
Additionally, we have double-checked all the equations in the manuscript and will update any inaccurate representations in the final version.
+ __Q3: "If my understanding is correct, the proposed method is not able to reach the cost limitation in several point-button and car-button environments, which suggests the inadequacy of the proposed method."__
__A3:__ Yes, your understanding is correct. In the *Point-button* and *Car-button* scenarios, there are continuously moving obstacles, which significantly increase the dimensionality of the dynamic transition matrix $P$ compared to the *Point-goal* and *Car-goal* scenarios. However, the size of our sample dataset in all scenarios is 1e7 action-state pairs, which leads to a significant increase in OOD actions in the *Point-button* and *Car-button* scenarios, thereby affecting the algorithm's performance. Furthermore, even though the cumulative costs in these two scenarios do not meet the safety constraints, the cumulative costs achieved by the VOCE algorithm are lower than those of the previous state-of-the-art algorithms, suggesting that VOCE still surpasses them. To ensure the policy's adherence to safety constraints in both the *Point-button* and *Car-button* scenarios, we can explore the following approaches: (1) increasing the amount of sample data in these scenarios to mitigate the impact of OOD actions during policy training; (2) increasing the minimum pruning threshold for the hyperparameter $\chi$, which, however, may lead to a significant reduction in reward values.
+ __Q4: "I would like to know the reason that the authors chose to follow CQL instead of other offline RL algorithms. Since there is already a KL term in the optimization objective, I think Behavior cloning or advantage-weighted approach, which directly constrains the KL divergence between the learned policy and the behavior policy, is more straightforward here and may achieve higher performance in practice. Is there any reason to consider the conservative Q estimation rather than other choices? "__
__A4:__ Insightful question. You have presented an excellent approach: constraining the target policy to the behavior policy using the KL divergence. When the expert sample data are safe, this method enables the target policy to achieve high rewards while adhering to safety constraints. However, it is challenging to guarantee that the policy satisfies safety constraints with non-expert and unsafe datasets such as $\varphi =\{0.4,0.6 \} $. We estimate $Q^{r}$ and $Q^{c_i}$ through the lower and upper bounds of Q-value estimation, respectively, which can theoretically ensure that the policy meets the safety constraints, and also avoids additional constraints that would increase the instability of model training when solving the optimal variational distribution. | Summary: The paper presents a variational approach to offline safe reinforcement learning (RL), where the class of variational distributions is the set of policies which satisfy the cost constraints. It is shown how to obtain the closed-form solution for the optimal variational distribution, and how to extract a parametric policy given the variational distribution. To guarantee that the reward Q function is not overestimated and the cost Q function is not underestimated, the tools of Conservative Q Learning (CQL) are incorporated.
Strengths: * The variational formulation is novel to my knowledge and may inspire additional work in this area.
* VOCE performance is good compared to existing algorithms, both in terms of high reward and satisfying the cost constraint.
* The paper is generally clear and not hard to read.
Weaknesses: * There is no discussion of the runtime of the algorithm, which I imagine is comparatively high, given that you have to solve an optimization problem (Eqn. 11) at every step.
* While the final hyperparameters are provided, we are not told how they were tuned, nor do we have a sense of how sensitive VOCE’s performance is to their values.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: * How are the trade-off factors ($\kappa$, $\chi$) determined? Are you using a dual tuning method akin to the $\alpha$ in CQL?
* In Eqn. (20), should be $a_t$ instead of $\cdot$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: No, limitations are not addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer cmps:
Thank you for all your valuable suggestions and meticulous comments. We will incorporate your suggestions in the revision. Below we respond to your key concerns point by point. Please let us know if there are any further questions.
+ __Q1: "There is no discussion of the runtime of the algorithm, which I imagine is comparatively high, given that you have to solve an optimization problem (Eqn. 11) at every step."__
__A1:__ We greatly appreciate your raising this question. We have documented the single-step time consumption of VOCE and existing methods during both training and testing, as well as the time consumption of each module of the VOCE algorithm during training. The recorded results are presented in Tables 1 and 2, respectively. The results in Table 1 indicate that during training, the additional networks and optimization parameters in VOCE indeed lead to a noticeable increase in time consumption per step compared to other methods. However, during testing, since only the policy network needs to be executed, the per-step time consumption of the VOCE algorithm differs only slightly from other algorithms. Furthermore, the execution time per step during testing is well below 1 ms, which satisfies the real-time requirements of practical applications such as robot control and autonomous driving. Finally, as evident from the results in Table 2, higher time consumption is observed during Q-value evaluation due to the computation of multiple target Q-values and gradients, whereas the time consumption for the Lagrange multipliers $\lambda, \eta$ is relatively low, as they are updated only once per step using the gradients.
__Table 1 The computational time per single step of the algorithm during training and testing processes.__
|Algorithms|Training(s)|Testing(s) |
| ---------| --------- | --------- |
|VOCE |3.1498 |3.0665×1e-4|
|C-CRR |0.0891 |1.5140×1e-4|
|COptiDICE |0.0358 |1.4316×1e-4|
|BCQ-Lag |0.1104 |2.7976×1e-4|
__Table 2 Time consumption during the training processes of various modules in VOCE.__
| $Q^{r}(s_t,a_t)$ | $Q^{c}(s_t,a_t)$ | $(\lambda, \eta)$ | $q(a_t\|s_t)$ | $\pi_{\theta}(a_t\|s_t)$ |
|:---------:|:---------:|:---------:|:---------:|:---------:|
| 1.2728 | 1.2668 | 0.0026 | 0.5533 | 0.0543 |
+ __Q2: "While the final hyperparameters are provided, we are not told how they were tuned, nor do we have a sense of how sensitive VOCE’s performance is to their values. How are the trade-off factors ($\kappa$, $\chi$) determined? Are you using a dual tuning method akin to the $\alpha$ in CQL? "__
__A2:__ Interesting and insightful question. Yes, we have employed a dual-tuning approach similar to CQL. Setting the trade-off factor $\kappa$ too large might result in a significant underestimation of the Q-values $Q^{r}$, while setting it too small could fail to suppress the overestimation of $Q^{r}$ caused by OOD actions. Manually adjusting these parameters is time-consuming; hence, we follow the dual-tuning methodology of CQL, utilizing gradient descent for adaptive adjustment of the parameter $\kappa$. The trade-off factor $\chi$ is set using a similar method. To facilitate the observation of the variations in the trade-off factors $\kappa$ and $\chi$, we have documented their evolution during the training process; the variation curves are depicted in Fig. 2. The results indicate that as the model gradually converges, the trade-off factors $\kappa$ and $\chi$ diminish progressively and eventually stabilize at a value greater than zero.
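As a toy illustration of this dual-tuning idea (an assumed Lagrangian-style update in the spirit of CQL's $\alpha$ adjustment, not the authors' exact code), the factor can be raised while the OOD-vs-dataset Q-value gap exceeds a slack $\tau$ and lowered once it falls below it:

```python
def dual_update(kappa, gap, tau=0.1, lr=0.05):
    """One dual-tuning step on the trade-off factor kappa.

    gap: E[Q^r on OOD actions] - E[Q^r on dataset actions].
    kappa rises while the gap exceeds the slack tau and decays
    once conservatism has closed the gap; it stays non-negative.
    """
    return max(0.0, kappa + lr * (gap - tau))

kappa = 1.0
history = []
for step in range(2000):
    # simulated gap that shrinks toward zero as training converges
    gap = max(0.0, 1.0 - step / 200.0)
    kappa = dual_update(kappa, gap)
    history.append(kappa)
```

In this simulation $\kappa$ peaks and then decays to the non-negativity clamp as the gap closes; with real critics it settles near a small positive value, matching the trend described above.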
+ __Q3:"In Eqn. (20), should be $a_t$ instead of $\cdot$ ?"__
__A3:__ Thank you for your suggestion. We have revised Eq. (20) to the following expression:
$\mathcal{L}(\theta)=\max \mathbb{E}\_{\rho(s_0)}\mathbb{E}\_{q(a_t | s_t)} [\log {\pi_{\theta}(a_t |s_t)}]$
Additionally, we have double-checked all the equations in the manuscript, and we have revised Eq.(19) to:
$\mathcal{L}(\theta)=\max \mathbb{E}\_{\tau\sim q} \left[-\alpha D_{KL}(q(\cdot|s_t)||\pi_{\theta}(\cdot|s_t))\right]= \max \alpha \mathbb{E}\_{\rho(s_0)} \mathbb{E}\_{q(a_t | s_t)} [\log {\pi_{\theta}(a_t|s_t)}-\log q(a_t|s_t)]$
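The Eq. (19)/(20) objective reduces the policy update to a maximum-likelihood fit of $\pi_{\theta}$ to the variational distribution $q$. A toy discrete check of this equivalence (an assumed softmax setup for illustration, not the paper's network):

```python
import math

# Minimizing KL(q || pi_theta) over theta equals maximizing
# E_{a~q}[log pi_theta(a)], since E_q[log q] is constant in theta.
q = [0.7, 0.2, 0.1]            # optimal variational distribution (assumed)
logits = [0.0, 0.0, 0.0]       # parameters of a toy softmax policy

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

lr = 0.5
for _ in range(500):
    p = softmax(logits)
    # gradient of E_q[log pi_theta] w.r.t. softmax logits is (q - p)
    logits = [l + lr * (qi - pi) for l, qi, pi in zip(logits, q, p)]

p = softmax(logits)            # ascent drives pi_theta toward q
```

The fixed point of the ascent is $\pi_{\theta} = q$, which is exactly what minimizing the KL term in Eq. (19) achieves.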
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. I think the discussion of runtime would be useful context to include, even if only in the appendix.
Regarding hyperparameters:
* "The results shown in the figure indicate that as the model gradually converges, the trade-off factor $\kappa$ and $\chi$ diminishes progressively and eventually stabilizes at a value greater than zero." It is not clear from the plots that $\kappa$ and $\chi$ converge to positive values – it looks like they go to zero. Perhaps it would be more apparent if you used a log-y scale for these plots?
* A description of how the dual tuning is performed would be a useful addition. In particular, you need to introduce additional constraints, so (i) what are these constraints, and (ii) how do you choose the hyperparameters involved in these constraints?
---
Reply to Comment 1.1.1:
Title: Response
Comment: Dear reviewer cmps:
Thank you very much for your reply. Your valuable suggestions hold significant implications for enhancing the quality of our work. We will include a discussion on the model's runtime in the appendix of the final version.
__Q4: ""The results shown in the figure indicate that as the model gradually converges, the trade-off factor $\kappa$ and $\chi$ diminishes progressively and eventually stabilizes at a value greater than zero." It is not clear from the plots that $\kappa$ and $\chi$ converge to positive values – it looks like they go to zero. Perhaps it would be more apparent if you used a log-y scale for these plots?"__
__A4:__ Thank you for your suggestion. Adopting a logarithmic scale does indeed provide a clearer representation of the variation trends and stability of the trade-off factor. We have transformed the trade-off factor's variation curve into a logarithmic coordinate system. However, we are currently unable to submit images. We present the stable values of the trade-off factor in the table below.
__Table 1 Real and logarithmic values of the balancing factors $\kappa$ and $\chi$__
| Step | 10 | 20 | 30 | … | 29960 | 29970 | 29980 | 29990 | 30000 |
|-----------|--------------|--------------|--------------|---|--------------|--------------|--------------|--------------|---------------|
| $\kappa$ | 0.99631976 | 0.9858102 | 0.97578514 | … | 1.55E-05 | 1.55E-05 | 1.54E-05 | 1.54E-05 | 1.53E-05 |
| $\log\kappa$ | -0.003687029 | -0.014291438 | -0.02451286 | … | -11.07079701 | -11.0749356 | -11.07907 | -11.0832096 | -11.08735392 |
| $\chi$ | 0.99612333 | 0.98561966 | 0.97548907 | … | 1.41E-05 | 1.40E-05 | 1.40E-05 | 1.39E-05 | 1.38E-05 |
|$\log\chi$ | -0.003884204 | -0.014484739 | -0.024816323 | … | -11.17112616 | -11.17524275 | -11.17934963 | -11.18346784 | -11.18759897 |
__Q5:A description of how the dual tuning is performed would be a useful addition. In particular, you need to introduce additional constraints, so (i) what are these constraints, and (ii) how do you choose the hyperparameters involved in these constraints?__
__A5:__ Thank you very much for your suggestion. Dual tuning involves adapting the balancing trade-off factors through a gradient descent strategy.
Concretely, building upon the optimization objective of the Bellman iteration, we introduced a trade-off factor $\kappa$. This factor facilitates the minimization of the reward Q-values of action-state pairs under the marginal distribution of unseen actions $\mathbb{E}\_{s_t\sim {D},a_t \sim \pi_{\mathcal{M}}(a_t|s_t)} Q^{r}(s_t,a_t)$, as well as the maximization of the reward Q-values of action-state pairs occurring within the sample space $\mathbb{E}\_{s_t\sim {D},a_t \sim \hat\pi_{\beta}(a_t|s_t)} Q^{r}(s_t,a_t)$. By taking the partial derivative of Eq. (13) in the manuscript with respect to the trade-off factor $\kappa$, we then employ gradient descent to adaptively adjust $\kappa$. On the other hand, building upon the optimization objective of the Bellman iteration, we introduced the trade-off factor $\chi$ to maximize the cost Q-values of action-state pairs under the marginal distribution of unseen actions $\mathbb{E}\_{s_t\sim {D},a_t \sim \pi_{\mathcal{R}}(a_t|s_t)} Q^{c_i}(s_t,a_t)$. Similarly, by taking the partial derivative of Eq. (17) in the manuscript with respect to $\chi$, we employ gradient descent to adaptively adjust the trade-off factor $\chi$. | Rebuttal 1:
Rebuttal: Dear reviewers:
We thank all reviewers for your time and suggestions, and we look forward to further discussion. We have responded to your questions in detail. If you have further questions or concerns, we will continue to reply before the end of the author-reviewer discussion period. Thank you very much for your review time and efforts.
+ __How should the hyperparameters $\kappa$, $\chi$, and $\epsilon$ be set?__
__A:__ **(1)** Regarding the hyperparameters $\kappa$ and $\chi$ mentioned in the manuscript, setting them too small makes it difficult to guarantee the algorithm's performance theoretically, while setting them too large leads to conservative behavior and a decrease in performance. Therefore, we adopt a dual-tuning approach with gradient-based adaptation instead of manual setting. To facilitate the observation of the variations in the trade-off factors $\kappa$ and $\chi$, we have documented their evolution during the training process; the variation curves are depicted in Fig. 2. The results indicate that as the model gradually converges, the trade-off factors $\kappa$ and $\chi$ diminish progressively and eventually stabilize at a value greater than zero.
**(2)** Regarding the manually set hyperparameter $\epsilon$, we have conducted ablation experiments as shown in Fig. 1. The results indicate that setting $\epsilon$ too small reduces the convergence speed of the policy and may even diminish the algorithm's overall performance. Conversely, when $\epsilon$ is set too large, policy instability can arise. The experimental outcomes demonstrate that $\epsilon \in [0.1,1]$ ensures a favorable balance between convergence and stability for the policy; this represents a feasible range rather than an optimal interval.
+ __Limitations and Future Work__
__A:__ The setup of this study involves learning policies that satisfy safety constraints from offline data without interacting with the environment. Consequently, the size and quality of the dataset have a direct impact on the algorithm's performance. Furthermore, within the offline setting, the limited sample data available makes it challenging to adequately represent the state transition matrix $P$ in non-stationary environments. As a result, the VOCE algorithm faces difficulties in learning high-reward policies that satisfy safety constraints in non-stationary environments. We will supplement the discussion of these limitations in the final version.
Pdf: /pdf/72fbea8cadda01fe499ec13ec557d1e0bf3b94b7.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Q-DM: An Efficient Low-bit Quantized Diffusion Model | Accept (poster) | Summary: This paper presents a quantization-aware training framework for diffusion models. The authors identified activation distribution oscillation and quantization error accumulation as the main causes of the performance drop. To close the performance gap, the authors developed a timestep-aware quantization (TaQ) method for data normalization and a noise-estimating mimicking (NeM) training scheme. Experiments were conducted to demonstrate the effectiveness of the proposed method.
Strengths: 1. The motivation of this paper is valid. The proposed method is reasonable. The shifting of the activation distributions and the quantization error accumulation are quite likely to affect the overall performance. The authors propose Timestep-aware Quantization (TaQ) and Noise-estimating Mimicking (NeM) accordingly to address these two problems.
2. The paper is well-organized and the writing is fluent. This helps the readers capture the high-level ideas well and the detailed experimental setups make this paper easy to replicate the results.
3. The experimental results are promising. Q-DMs demonstrate performance on par with FP models.
Weaknesses: 1. Will the TaQ quantization process affect the inference speed? See 3.b in the ``Questions'' section.
2. Some technical details should be clarified (see the questions below).
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. What does "PTQ4DM" exactly refer to in Tab. 2? Tab.2 cites [1] while referring PTQ4DM to [2] in Line 243. As far as I am concerned, PTQ4DM refers to a recently published work [2], presenting a method for calibration data collection in diffusion quantization, then what does Line 242 mean by saying "we also report the **classification performance**..." ?
2. I have some questions regarding the method details, I would like to ask for some clarifications:
a. Did you calculate the statistical mean and variance of each activation in an offline manner, i.e. the statistical values were pre-computed with the full-precision DMs and not updated with the training procedure?
b. How did you conduct inference with such the timestep-aware design? Did you conduct activation normalization with the pre-computed values on-the-fly? If so, how would it affect the inference efficiency?
c. How is the "distance" in Figure 4 calculated? Also, is the activation distribution in Figure 3 layer-wise or channel-wise?
d. Is it possible to combine the original training loss in DMs with the NeM loss?
[1] Post-training piecewise linear quantization for deep neural networks
[2] Post-Training Quantization on Diffusion Models
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors do not include the limitations and potential negative societal impact in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1**: What does "PTQ4DM" exactly refer to in Tab. 2? Tab.2 cites [1] while referring PTQ4DM to [2] in Line 243. As far as I am concerned, PTQ4DM refers to a recently published work [2], presenting a method for calibration data collection in diffusion quantization, then what does Line 242 mean by saying "we also report the classification performance..." ?
**A1**: Sorry for the typo: the citation should be “Post-training quantization on diffusion models [2]”, and Line 242 should read “We also report the generation performance of the 8-bit PTQ method”. These typos will be revised in the final version.
**Q2(a)**: Did you calculate the statistical mean and variance of each activation in an offline manner, i.e. the statistical values were pre-computed with the full-precision DMs and not updated with the training procedure?
**A2(a)**: Only the statistical values in Fig. 3 are calculated offline by a full-precision diffusion model, to explain our motivation. In our QAT process, the statistical mean and variance of each activation are calculated on-the-fly. We apologize for the typo and the misleading caption. We will polish it in the final version.
**Q2(b)**: How did you conduct inference with such the timestep-aware design? Did you conduct activation normalization with the pre-computed values on-the-fly? If so, how would it affect the inference efficiency?
**A2(b)**: The calculation of the statistical mean and variance is performed on-the-fly during inference. As shown in Tab. 2, the increase in FLOPs is negligible. (Copied from Q2 of R\#bfDz.)
**Q2(c)**: How is the "distance" in Figure 4 calculated? Also, is the activation distribution in Figure 3 layer-wise or channel-wise?
**A2(c)**: In Fig. 4, we calculate the distance between the same layer of the full-precision DMs and quantized counterparts. In Fig. 3, the activation distribution is about all elements in one specific layer, i.e., layer-wise distribution. We will add these detailed descriptions in the final version.
**Q2(d)**: Is it possible to combine the original training loss in DMs with the NeM loss?
**A2(d)**: We have conducted experiments combining the original training loss in DMs with the NeM loss. We evaluate the combination of the original loss in DDPM [3] and NeM with a 4-bit Q-DM, based on a 50-step DDIM sampler at 32×32 generating resolution on CIFAR-10, below. As can be seen, this combination affects the performance of Q-DM: the best result (in **bold**) is achieved by combining the 4-bit baseline with our NeM alone. Hence, we find that the NeM method performs best when applied by itself. These results and analysis will be added in the final version.
| Method | FID$\downarrow$ | IS $\uparrow$|
| ------ | ----- | ------ |
| Full-precision | 4.67 | 9.27 |
| 4-bit Baseline (LSQ) + Original Loss [3] | 10.22 | 8.91 |
| **4-bit Baseline (LSQ) + NeM** | **8.98** | **8.92** |
| 4-bit Baseline (LSQ) + Original Loss [3] + NeM | 9.05 | 8.87 |
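As a rough illustration of how the original denoising loss and a NeM-style mimicking term could be combined, here is a minimal NumPy sketch. The function name, the weighting factor `lam`, and the MSE form of both terms are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def combined_loss(eps_true, eps_q, eps_fp, lam=1.0):
    # Sketch: DDPM-style noise-prediction loss plus a NeM-style
    # mimicking term (quantized student vs. full-precision teacher).
    # Both terms are assumed to be simple MSEs for illustration.
    original = np.mean((eps_q - eps_true) ** 2)  # predict the added noise
    mimic = np.mean((eps_q - eps_fp) ** 2)       # mimic the FP teacher
    return original + lam * mimic
```

Setting `lam` to zero recovers the original loss alone, matching the first row of the table above in spirit.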
[1] Post-training Piecewise Linear Quantization for Deep Neural Networks. ECCV'2020.
[2] Post-Training Quantization on Diffusion Models. CVPR'2023.
[3] Denoising Diffusion Probabilistic Models. NeurIPS'2020.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: Thanks for the authors' response. My concerns have been addressed. I keep my rating as Accept.
---
Reply to Comment 1.1.1:
Title: Thanks for the reviewing
Comment: We express our appreciation for the time you dedicated to reviewing our paper. | Summary: The paper proposes a quantization-aware training scheme for diffusion models, based on the well-known method LSQ.
In the paper, the authors identify that the bottleneck comes from a large distribution oscillation on activations and accumulated quantization error caused by the denoising process.
Then, they suggest methods to address these issues:
Timestep-aware Quantization (TaQ) – timestep-aware activation smoothing to handle the large oscillation on activations
Noise-estimating Mimicking (NeM) – reducing error accumulation throughout the multi-step denoising process with quantization
Strengths: They observed and identified the issues when compressing DMs in a quantization-aware training manner.
Weaknesses: 1. The suggested methods to tackle the issues in QAT for DMs are not novel. TaQ and NeM appear to be just normalization and knowledge distillation, which are commonly used in the related literature.
2. The paper lacks numerical descriptions of the QAT process, such as training time, the amount of calibration data, and the number of epochs. The authors should detail how many resources are needed for reproduction.
3. The models used in the paper to show their performance are a little outdated. The datasets used are also insufficient to prove the performance. The authors should provide more thorough experimental results with recent models and datasets.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Generally, QAT requires more data and training time than PTQ, and U-net inherently has a large number of parameters, so requires a lot of training data and takes a long time to optimize. How much data is used for QAT for DM? And how much time did you take to complete the quantization for U-net?
2. When performing inference, is computing the statistical mean and variance needed at every time step? If so, it imposes additional cost on the hardware. How are these values handled at inference time?
3. As a naive approach, we may have a different step size of activation for each time step to handle the oscillation on activations. Is there any problem with the naive approach compared to your approach?
4. PTQ4DM and DFQ also handle the dynamic range of activations. Compared with them, what are the advantages of your method?
5. The problem you observed in quantizing DMs may have been dealt with in other literature, even if not related to quantization or DMs. For example, there are many approaches to normalizing values. The paper would be more robust if the authors provided sufficient related work handling the issues they address.
6. In Eq. (11) and (12), TaQ quantizes normalized activations. Is there no need for compensation for the normalization?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: See weakness and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Due to the character number constraint of rebuttal, we abridge the question in the rebuttal part. All experiments below are conducted on 50-step DDIM sampler with 32×32 generating resolution on CIFAR-10 dataset.
**Q1**: Regarding TaQ and NeM.
**A1**: Our method is proposed based on the observed activation oscillation and quantization error accumulation phenomena, and QAT methods for diffusion models remain largely unexplored. The experimental results also show the advantages of our method.
**Q2**: Regrading numerical descriptions for the QAT process.
**A2**: Sorry for the omission. The training schedule of DDPM and DDIM is 80k steps, which takes 6 GPU days. In the QAT of diffusion models, there is no calibration dataset. We will add these descriptions in the final version.
**Q3**: Regarding experiments with recent models and datasets.
**A3**: We have conducted experiments of other models and datasets. Please also see the Q3 & Q4 of Reviewer mdg9.
**Q4**: Regarding the data and training cost of Q-DM.
**A4**: We employ the complete set of images from the training dataset, the same as other QAT methods [3-5]. The training cost comparison between Q-DM and the baseline LSQ is shown below. Our Q-DM requires only about 1.7% more training time than the baseline method but achieves higher performance. We will add explanations in the final version for clarity.
|Method|Training time (gpu days)|
|--|--|
|Baseline|6.0|
|Q-DM|6.1|
**Q5**: Regarding computation cost of TaQ.
**A5**: We reply to this question point-by-point.
- Yes, each step requires this computation, but the computational cost is negligible. The OPs values are shown in Tab. 2 of the submission, which include the inference cost of Q-DM with and without the mean and variance computation.
- We further calculate the OPs of Q-DM and the TaQ module in detail, as an example, in the table below. As shown, the extra cost of the TaQ module accounts for less than 1% of the total OPs, which is almost negligible during inference.
|Method|\#Bits|OPs|TaQ OPs (ratio)|
|--|--|--|--|
|Full-precision|32/32|390.4|- (-)|
|Q-DM|4/4|49.9|0.10 (0.2%)|
|Q-DM|3/3|25.1|0.10 (0.4%)|
|Q-DM|2/2|12.6|0.10 (0.8%)|
**Q6**: Regarding different step size
**A6**: Using a different step size for each weight at every time step results in increased storage and computational burden. As shown below, our method achieves higher compression and acceleration ratios.
|Method|\#Bits|Size|OPs|
|--|--|--|--|
|Q-DM |4/4|0.56|49.98|
|Different step size|4/4|0.57|50.01|
|Q-DM|3/3|0.28|25.12|
|Different step size|3/3|0.29|25.15|
|Q-DM|2/2|0.14|12.66|
|Different step size|2/2|0.15|12.69|
**Q7**: Regarding advantages of Q-DM.
**A7**: We state the difference and advantages of our method in two aspects: technology and performance.
- **Regarding technology**: PTQ4DM [5] is a PTQ method designed for diffusion models, which can partially address the activation oscillation issue by constructing a more suitable calibration dataset (not applicable in the QAT process, see Fig. 4 in [6]). DFQ [7] eliminates this phenomenon through cross-layer equalization for image classification and detection backbones, which is also specific to the PTQ process and not suitable for the QAT of DMs. In contrast, our method improves the QAT process for DMs and successfully eliminates the activation oscillation issue, from the perspective of QAT-specific technology.
- **Regarding performance**: Compared with PTQ methods, the proposed Q-DM also benefits from the main advantage of the QAT process, i.e., superior performance at lower bit-width precision. We further evaluate PTQ4DM and DFQ at 8- and 4-bit bit-widths; the results are shown below. We have two main observations: 1) the 4-bit Q-DM achieves better performance than the 8-bit PTQ methods while possessing lower-precision weights and activations; 2) the PTQ methods deteriorate in 4-bit format, while Q-DM performs well at lower precision. These observations demonstrate the superiority of the Q-DM method.
|Method|\#Bits|FID$\downarrow$|IS$\uparrow$|
|--|--|--|--|
|Full-precision|32/32|4.67|9.27|
|PTQ4DM |8/8|18.02|8.87|
|DFQ|8/8|18.96|8.83|
|PTQ4DM|4/4|19.78|8.76|
|DFQ |4/4|20.02|8.68|
|**Q-DM**|4/4|**8.98**|**8.92**|
**Q8**: Difference of Q-DM and other normalization.
**A8**: Different from prior works, our TaQ numerically analyzes the activation ranges across different timesteps and effectively mitigates distribution oscillation specifically for low-bit quantized DMs. To validate this, we show experimental comparisons with prior methods that normalize the activation values below. We will add more references and this comparison in the final version for clarity.
|Method|\#Bits|FID$\downarrow$|IS$\uparrow$|
|--|--|--|---|
|Q-ViT|4/4|9.48|8.85|
|**Q-DM**|4/4|**8.98**|**8.92**|
**Q9**: Regarding compensation for normalization?
**A9**: We conduct experiments regarding normalization and its compensation as below. As shown, our TaQ achieves better performance with less extra parameters introduced. This will be added in the final version.
|Formulation of Eq. (11)|FID$\downarrow$|IS$\uparrow$|
|--|--|--|
|**a** (Baseline)|8.98|8.92|
|**Norm(a) (TaQ)**|**6.89**|**8.96**|
|Norm(**a**)*scale|6.92|8.95|
|Norm(**a**)*scale+bias|6.91|8.95|
[1] Denoising Diffusion Probabilistic Models. NeurIPS'2020.
[2] Denoising Diffusion Implicit Models. ICLR'2021.
[3] Q-ViT: Accurate and Fully Quantized Low-bit Vision Transformer. NeurIPS'2022.
[4] Q-DETR: An Efficient Low-Bit Quantized Detection Transformer. CVPR’2023.
[5] Post-Training Quantization on Diffusion Models. CVPR'2023.
[6] A Survey of Quantization Methods for Efficient Neural Network Inference. ArXiv: 2103.13630.
[7] Data-Free Quantization Through Weight Equalization and Bias Correction. ICCV'2019.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for your effort to address my concerns. I acknowledge your work applying QAT to diffusion models for the first time (which might be helpful when quantizing these models), but I am still concerned about the originality and novelty of the paper, so I keep the score as it stands.
---
Reply to Comment 1.1.1:
Title: Thanks for the reviewing
Comment: We extend our gratitude for your diligent review of our paper. It is important to underscore that we introduce QAT technology into diffusion models for the first time, yielding notable outcomes in quantized diffusion models with 2/3/4 bit-widths. | Summary: In this paper, a novel method called Q-DM is introduced, which enables the creation of low-bit quantized diffusion models. The authors first give extensive analysis about two primary challenges faced by low-bit quantized DMs: significant distribution oscillation on activations and accumulated quantization error arising from the multi-step denoising process. To address these issues, the authors propose two techniques: Timestep-aware Quantization (TaQ) and Noise-estimating Mimicking (NeM). TaQ is designed to mitigate the distribution oscillation problem, while NeM aims to reduce the accumulated quantization error. By incorporating these techniques into Q-DM, the paper demonstrates its ability to overcome these challenges effectively, leading to superior performance compared to existing methods. The experimental results provided in the paper serve as evidence of the high-quality performance achieved by Q-DM.
Strengths: 1. This paper is easy to follow. The organization of this paper is exemplary, as it effectively presents the proposed Q-DM in a clear and comprehensible manner. The paper provides comprehensive explanations of how Q-DM improves the performance of quantized DMs.
2. This paper presents a novel quantization method known as Q-DM, which addresses several challenges in low-bit quantized diffusion models. The authors propose the Timestep-aware Quantization (TaQ) method to tackle the issue of activation distribution oscillation, which is caused by the randomly sampled timestep during training. Additionally, they introduce the Noise-estimating Mimicking (NeM) scheme to effectively minimize accumulated errors.
3. The experimental results are significant. The Q-DMs exhibit performance comparable to that of full-precision models, showcasing the effectiveness of the proposed methods.
Weaknesses: 1. Is the model quantized during the entire training and sampling process? This may lead to different inference speeds. Also, are the quantization-related parameters the same across different timesteps?
2. Is the proposed TaQ method only used in the Q-AttnBlock module? And how is the conv layer quantized in Q-DM?
3. In Table 2, how are the ‘OPs’ calculated? It would be helpful if the authors could provide some description of this metric.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1**: Is the model quantized during all the training and sampling process? Since these may lead to different inference speed. And are the quantization-related parameter all the same across different timestep?
**A1**: Yes, the model is quantized during both the training (to quantized values in floating-point format [1]) and sampling processes. Also, the quantization-related parameters are the same across different timesteps during inference.
**Q2**: Is the proposed TaQ method only used in Q-AttnBlock module? And how is the conv layer quantized in Q-DM?
**A2**: The proposed TaQ method is only deployed in the Q-AttnBlock module (Eq. (12)). The quantization method of the convolution layers in Q-DM is the same as in LSQ [2], i.e., the baseline method.
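Since the convolution layers use the baseline LSQ scheme, a minimal sketch of LSQ-style fake quantization (signed b-bit range with a learned step size) might look like the following. The function name and defaults are illustrative, not taken from the paper, and the step-size gradient machinery of LSQ is omitted.

```python
import numpy as np

def lsq_fake_quantize(w, step, bits=4):
    # LSQ-style fake quantization sketch: scale by the learned step,
    # round, clamp to the signed b-bit integer range, then rescale.
    qn, qp = -2 ** (bits - 1), 2 ** (bits - 1) - 1
    return np.clip(np.round(w / step), qn, qp) * step
```

In actual LSQ the step size is a trainable parameter updated via a straight-through estimator; here it is just a plain argument.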
**Q3**: In Table 2, how is the ‘OPs’ calculated? It would be more detailed if authors can provide some description about this metric.
**A3**: Following [3], the OPs are calculated as the number of FLOPs plus {$\frac {1}{32}$, $\frac {1}{16}$, $\frac {1}{8}$} of the number of {$2$, $3$, $4$}-bit multiplications, respectively. We will add a detailed description in the final version.
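The OPs rule described above can be sketched as a small helper; the argument names are hypothetical, and the weighting factors {1/32, 1/16, 1/8} follow the rule quoted from [3].

```python
def ops(flops, low_bit_mults, bits):
    # OPs = FLOPs + {1/32, 1/16, 1/8} x the number of
    # {2, 3, 4}-bit multiplications, per the quoted rule.
    factor = {2: 1 / 32, 3: 1 / 16, 4: 1 / 8}[bits]
    return flops + factor * low_bit_mults
```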
[1] A Survey of Quantization Methods for Efficient Neural Network Inference. ArXiv: 2103.13630.
[2] Learned Step Size Quantization. ICLR'2020.
[3] Q-DETR: An Efficient Low-Bit Quantized Detection Transformer. CVPR'2023. | Summary: The paper proposes two methods to mitigate the accuracy degradation caused by quantization of diffusion models: one is time-step-aware quantization (different calibration data and range for each time step of diffusion); the other is using a full-precision network for training-time distillation. Experiments on image generation on the CIFAR-10 and ImageNet datasets show the efficacy of the method.
Strengths: By using QAT, the proposed method manages to bring down the bitwidth of the neural network to lower than 8-bit. The FID and IS metrics support the efficacy of the method.
Weaknesses: The benefits of per-time-step quantization have been known since earlier works like PTQ4DM and q-diffusion. It is desirable to compare the proposed method with these prior works. Note that QAT and PTQ do not make much difference here, as the underlying motivation of per-time-step quantization, dealing with a changing dynamic range, is the same. Also, per-time-step quantization is almost zero-overhead, as the weights need to be loaded for the computation of each step anyway.
Also, having a full-precision network as teacher for knowledge distillation (including a full-precision or higher-precision teacher) is quite a standard approach. It is not obvious what specialties of QAT are incorporated here.
q-diffusion: https://arxiv.org/abs/2302.04304
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Can visual results be provided for intuitive inspection? In particular, spotty noise may not be well captured by FID/IS metrics but will be blatant when doing visual inspection.
In Table 2, there are a few pairs of results with similar IS but very different FID, like PTQ4DM CIFAR-10 32×32 50 steps and Baseline CIFAR-10 32×32 3/3. Any discussion of this variation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: No particular limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1**: The benefits of per-time-step quantization has been known since earlier works like PTQ4DM [1] and Q-Diffusion [2]. It is desirable to know the proposed method and prior works. Note that QAT and PTQ do not make much difference here, as the underlying motivation of per-time-step quantization to deal with changing dynamic range change is the same here. Also having per-time-step quantization is almost zero-overhead as the weights need be loaded for computation of each step anyway.
**A1**: QAT is technically different from PTQ, and also QAT can achieve a better performance than PTQ as shown in our experimental parts. Previous PTQ methods for quantized DM, e.g., PTQ4DM [1], partially address the activation oscillation issue by constructing a calibration dataset, which is not applicable in the QAT process (see Fig. 4 in [3]). Differently, our method addressed this problem by introducing TaQ and NeM into the QAT framework, which can well reduce the activations oscillation and effectively improve the performance of Q-DM. We will add more related work to clarify the difference from our method in the final version.
**Q2**: Also having a full-precision network as teacher for knowledge distillation (including having full precision or higher precision teacher) is quite a standard approach. It's not obvious what specialties of QAT are incorporated here.
**A2**: Although using knowledge distillation is a standard approach in network quantization, the KD method for quantized DMs remains largely under-developed. As one of our contributions, we propose a new knowledge distillation method, called Noise-estimating Mimicking (NeM), to enhance the performance of quantized DMs. We provide the motivation and theoretical derivation of NeM in Sec. 4.2. As shown in Tab.1, the proposed NeM significantly improves the performance of the baseline method, which validates the motivation and theoretical derivation.
**Q3**: Can visual results be provided for intuitive inspection? In particular, spotty noise may not be well captured by FID/IS metrics but will be blatant when doing visual inspection.
**A3**: Thanks for the advice, we will provide qualitative results in the final version.
**Q4**: In Table 2, there are a few pairs of results with similar IS but very different FID, like PTQ4DM CIFAR-10 32×32 50 steps and Baseline CIFAR-10 32×32 3/3 . Any discussions of this variation?
**A4**: Generally, the IS metric yields similar results, as shown in our practice and in other methods [1,4], even when the FID values are very different. For example, a similar phenomenon can be observed in PTQ4DM [1] (Table 4) and DDPM [4] (Table 1). We will add this discussion in the final version.
[1] Post-Training Quantization on Diffusion Models. CVPR'2023.
[2] Q-Diffusion: Quantizing Diffusion Models. ArXiv:2302.04304.
[3] A Survey of Quantization Methods for Efficient Neural Network Inference. ArXiv: 2103.13630.
[4] Denoising Diffusion Probabilistic Models. NeurIPS'2020. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper identifies two challenges in low-bit diffusion models (DMs): activation distribution oscillation and quantization error accumulation. To tackle these challenges, the paper introduces two novel techniques: Timestep-aware Quantization (TaQ) and Noise-estimating Mimicking (NeM). Experimental results demonstrate superior performance compared to previous approaches on CIFAR-10 and ImageNet 64x64 datasets.
Strengths: The paper is easy to follow and low-bit diffusion model is an interesting idea worth to try. The presented ablation studies demonstrate that the proposed methods have a positive impact on quantized diffusion models.
Weaknesses: 1. The main contribution of this paper involves the utilization of statistical mean and variance to address distribution oscillation. Essentially, this approach is akin to applying a shift and scale operation following quantized operations, which is a common technique employed in low-bit quantization to achieve balanced quantization bins. Similar methods for weight and activation balancing can be found in [1] and [2, 3] respectively.
2. In Equation (12), the application of TaQ to the softmax attention scores appears questionable. Unlike other activations that typically conform to a Gaussian or bell-shaped distribution, softmax attention scores often exhibit a long-tailed distribution, where the sum of the probabilities is 1. Normalizing the softmax attention scores may disrupt the probabilistic interpretation of these scores, which raises concerns about its appropriateness.
3. The proposed Noise-estimating Mimicking technique necessitates fine-tuning the entire model, which may pose practical challenges for large diffusion models trained on extensive datasets like LAION-5B. Efficient fine-tuning method should be considered.
4. The experiments in the paper are limited to CIFAR-10 and ImageNet 64x64 datasets, with a maximum model size of 4.47M. To provide a comprehensive evaluation, it is recommended to conduct additional experiments on larger models, such as LDM[4], and include commonly used datasets like LSUN and ImageNet 256x256.
5. The accuracy comparison experiment in this paper seems to be inadequate, and there are issues with the citations for the compared method PTQ4DM [5]. Moreover, discrepancies exist between the FID and IS results for full-precision (FP) models in this paper and the results reported in PTQ4DM, which could potentially introduce unfairness in the comparisons. As an example, when employing the DDIM sampler for 100 steps on CIFAR-10, the FID for the FP model is reported as 10.05 in PTQ4DM, whereas this paper indicates a result of 4.16.
[1] Qin, Haotong, et al. "Forward and backward information retention for accurate binary neural networks." CVPR 2020.
[2] Liu, Zechun, et al. "Reactnet: Towards precise binary neural network with generalized activation functions." ECCV 2020.
[3] Wei, Xiuying, et al. "Outlier Suppression+: Accurate quantization of large language models by equivalent and optimal shifting and scaling." arXiv:2304.09145 (2023).
[4] Rombach, Robin, et al. "High-resolution image synthesis with latent diffusion models." CVPR 2022.
[5] Shang, Yuzhang, et al. "Post-training quantization on diffusion models." CVPR 2023.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: How are the model size and OPs calculated? The quantized model's size reported in the paper is precisely 1/bits of the full precision model. However, this is unusual because there must be some parameters that remain in full-precision.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: I did not identify any issues related to limitations and potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1**: The main contribution of this paper involves the utilization of statistical mean and variance to address distribution oscillation. Essentially, this approach is akin to applying a shift and scale operation following quantized operations, which is a common technique employed in low-bit quantization to achieve balanced quantization bins. Similar methods for weight and activation balancing can be found in [1] and [2, 3] respectively.
**A1**: To the best of our knowledge, our Q-DM is the first QAT-based work to effectively quantize diffusion models. Our method addresses the activation oscillation problem by introducing Timestep-aware Quantization (TaQ) and Noise-estimating Mimicking (NeM) into the QAT framework, which effectively improves the performance of quantized DMs. Different from the quantization of other deep models, our Q-DM involves both the training and inference processes; in this situation, our method remains very efficient, being based on simple shift and scale operations. Please also refer to Q8 of Reviewer bfDz, where we compare the performance of Q-DM with other normalization methods.
**Q2**: In Eq. (12), the application of TaQ to the softmax attention scores appears questionable. Unlike other activations that typically conform to a Gaussian or bell-shaped distribution, softmax attention scores often exhibit a long-tailed distribution, where the sum of the probabilities is 1. Normalizing the softmax attention scores may disrupt the probabilistic interpretation of these scores, which raises concerns about its appropriateness.
**A2**: We agree with your point on the Gaussian or bell-shaped distribution; our proposed TaQ method is applied to $a_q$ and $a_k$ before the softmax operation, rather than directly normalizing the softmax attention scores. Details are shown in line 2 of Eq. (12). Therefore, Eq. (12) is appropriate and does not disrupt the distribution of the attention scores. We will visualize the effect of the TaQ method on the probabilistic interpretation of the Q-AttnBlock in the final version for clarity.
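A minimal sketch of normalizing the attention inputs before the softmax, assuming on-the-fly per-tensor mean/variance statistics; the epsilon value, the per-tensor granularity, and the function names are illustrative assumptions, not the paper's exact Eq. (12).

```python
import numpy as np

def taq_normalize(a, eps=1e-5):
    # On-the-fly normalization of activations before quantization;
    # mean/variance are computed per tensor here as an assumption.
    return (a - a.mean()) / np.sqrt(a.var() + eps)

def attention_scores(q, k):
    # Normalize q and k BEFORE the softmax; the softmax output
    # itself is left untouched, preserving its probabilistic form.
    logits = taq_normalize(q) @ taq_normalize(k).T / np.sqrt(q.shape[-1])
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

Because only the pre-softmax activations are normalized, each output row still sums to one, i.e., the attention scores remain a valid probability distribution.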
**Q3**: The proposed Noise-estimating Mimicking technique necessitates fine-tuning the entire model, which may pose practical challenges for large diffusion models trained on extensive datasets like LAION-5B. Efficient fine-tuning method should be considered.
**A3**: We will add more efficient fine-tuning experiments in the final version. As shown in the table below, our method is also effective on large diffusion models fine-tuned with LoRA-R8 on extensive datasets such as LAION-5B. Our method consistently boosts the performance of the baseline method at the 2/3/4 bit-widths. We will add these experimental results in the final version.
| Model | Method |\#Bits| FID$\downarrow$ |
| ------ | ------ | ------ | ------ |
| LDM-4 | Full-precision | 32/32 |20.68|
| LDM-4 | Baseline (LSQ) | 4/4 |23.42|
| LDM-4 | Q-DM | 4/4 | 21.56 |
| LDM-4 | Baseline (LSQ) | 3/3 | 25.81 |
| LDM-4 | Q-DM | 3/3 | 23.72 |
| LDM-4 | Baseline (LSQ) | 2/2 | 27.49 |
| LDM-4 | Q-DM | 2/2 | 25.96 |
**Q4**: The experiments in the paper are limited to CIFAR-10 and ImageNet 64x64 datasets, with a maximum model size of 4.47M. To provide a comprehensive evaluation, it is recommended to conduct additional experiments on larger models, such as LDM [4], and include commonly used datasets like LSUN and ImageNet 256x256.
**A4**: We apply our Q-DM to LDM [4] and test it on the LSUN dataset; the results are shown in the table below. As can be seen, the proposed Q-DM also shows its superiority on LDM with the larger LSUN dataset. For example, Q-DM improves on the baseline models by about 0.2~0.5 FID at the same bit-width, which is significant. We will add these experiments in the final version.
| Model | Method |\#Bits| FID$\downarrow$ |
| ------ | ------ | ------ | ------ |
| LDM-4 | Full-precision | 32/32 | 2.98 |
| LDM-4 | Q-Diffusion |8/8 | 3.63 |
| LDM-4 | Baseline (LSQ) | 4/4 | 3.54 |
| LDM-4 | Q-DM | 4/4 | 3.01 |
| LDM-4 | Baseline (LSQ) | 3/3 | 3.68 |
| LDM-4 | Q-DM | 3/3 | 3.37 |
| LDM-4 | Baseline (LSQ) | 2/2 | 3.99 |
| LDM-4 | Q-DM | 2/2 | 3.76 |
**Q5**: The accuracy comparison experiment in this paper seems to be inadequate, and there are issues with the citations for the compared method PTQ4DM [5]. Moreover, discrepancies exist between the FID and IS results for full-precision (FP) models in this paper and the results reported in PTQ4DM, which could potentially introduce unfairness in the comparisons. As an example, when employing the DDIM sampler for 100 steps on CIFAR-10, the FID for the FP model is reported as 10.05 in PTQ4DM, whereas this paper indicates a result of 4.16.
**A5**: We follow the same experimental settings as DDPM [6] and DDIM [7]. The FID score of the DDIM sampler for 100 steps on CIFAR-10 is reported as 4.16 in the original paper (Table 1, page 7 of DDIM [7]), which matches our re-implementation.
[1] Forward and Backward Information Retention for Accurate Binary Neural Networks. CVPR'2020.
[2] ReActNet: Towards Precise Binary Neural Network with Generalized Activation Functions. ECCV'2020.
[3] Outlier Suppression+: Accurate quantization of large language models by equivalent and optimal shifting and scaling. ArXiv:2304.09145.
[4] High-Resolution Image Synthesis with Latent Diffusion Models. CVPR'2022.
[5] Post-Training Quantization on Diffusion Models. CVPR'2023.
[6] Denoising Diffusion Probabilistic Models. NeurIPS'2020.
[7] Denoising Diffusion Implicit Models. ICLR'2021.
---
Rebuttal Comment 1.1:
Title: Follow-up Feedback
Comment: Thank you for your response. However, I still have some questions:
1) Could you clarify the statement "Different from the quantization of other deep models, our Q-DM involves both the training and inference processes"? All methods outlined in [A-C] and the normalization layers are engaged in both the training and inference phases. Additionally, what is the key difference between the shift and scale operations you proposed and the techniques presented in the aforementioned methods?
2) Currently, even the most advanced QAT methods [D, E] experience significant accuracy loss when confronted with low bit-width scenarios (e.g., 2-bit). **None of the QAT methods in 2-bit quantization** manage to rival the performance of the 8-bit PTQ quantization. Considering that classification serves as a foundational task in computer vision, I wonder why your 2-bit Q-DM model's performance closely approaches that of the 8-bit Q-Diffusion (3.76 versus 3.63 on the LSUN dataset), a model that relies on an advanced reconstruction-based PTQ approach. Are there any undisclosed strategies that might have contributed to this result? If your method indeed excels in classification tasks under 2-bit quantization, its potential impact on the community would be substantial.
3) Since your method is QAT-based, the implementation details for training DMs are missing. For instance, how many epochs did you train? What optimizer and learning rate did you use? How many computing resources are required to train Q-DM?
4) For the experimental results in A3, did you evaluate LDM-4 on LAION-5B? How did you measure FID on LAION-5B? Did you use any pre-trained model?
5) In your rebuttal (to me and to other reviewers), a majority of the experimental results lack specific dataset details. For instance, there are several different datasets in LSUN, like bedrooms and churches.
[A] Forward and Backward Information Retention for Accurate Binary Neural Networks. CVPR 2020.
[B] ReActNet: Towards Precise Binary Neural Network with Generalized Activation Functions. ECCV 2020.
[C] Outlier Suppression+: Accurate quantization of large language models by equivalent and optimal shifting and scaling. ArXiv:2304.09145.
[D] Overcoming Oscillations in Quantization-aware Training. ICML, PMLR 2022.
[E] Q-ViT: Accurate and Fully Quantized Low-bit Vision Transformer. NeurIPS 2022.
---
Reply to Comment 1.1.1:
Title: Response to Follow-up Feedback
Comment: We extend our heartfelt gratitude for your dedication in reviewing our paper.
We break the questions into several parts and respond point-by-point.
**Q.I**: Could you clarify the statement "Different from the quantization of other deep models, our Q-DM involves both the training and inference processes"? All methods outlined in [A-C] and the normalization layers are engaged in both the training and inference phases. Additionally, what is the key difference between the shift and scale operations you proposed and the techniques presented in the aforementioned methods?
**A.I**: Sorry for the confusion. We will change the statement to: "Our approach specifically addresses the issue of Activation Distribution Oscillation, which is observed in the Q-Attention Block and caused by the diverse sampling timesteps of DMs." The methods [A-C] operate on each layer of CNNs or language models with more additional parameters. In contrast, our approach targets rectifying activations within the Q-Attention Block, applied at different positions (q and k) than [A-C], with fewer learnable parameters and different dimensions. We implement the techniques of [B, C] in quantized DMs using a 50-step DDIM sampler at a 32×32 generation resolution on the CIFAR-10 dataset; the results are shown below. Note that the shift and scale operations in IR-Net [A] cannot be used in the low-bit setting. We will add these comparisons in the final version.
| Method | #Bits | FID$\downarrow$ | IS$\uparrow$ |
| -- | -- | -- | -- |
| tech. in ReActNet | 4/4 | 9.25 | 8.87 |
| tech. in Outlier Suppression+ | 4/4 | 9.67 | 8.82 |
| Q-DM | 4/4 | 8.98 | 8.92 |
**Q.II**: Currently, even the most advanced QAT methods [D, E] experience significant accuracy loss when confronted with low bit-width scenarios (e.g., 2-bit). None of the QAT methods in 2-bit quantization manage to rival the performance of the 8-bit PTQ quantization. Considering that classification serves as a foundational task in computer vision, I wonder why your 2-bit Q-DM model's performance closely approaches that of the 8-bit Q-Diffusion (3.76 versus 3.63 on the LSUN dataset), a model that relies on an advanced reconstruction-based PTQ approach. Are there any undisclosed strategies that might have contributed to this result? If your method indeed excels in classification tasks under 2-bit quantization, its potential impact on the community would be substantial.
**A.II**: For the experiments on LSUN-Bedrooms, we do not use any undisclosed strategies. The performance gap across bit-widths differs by task; for the image generation task in particular, the gap appears to be smaller than for other tasks. We will add more detailed training and evaluation settings in the final version.
**Q.III**: Since your method is QAT-based, the implementation details for training DMs are missing. For instance, how many epochs did you train? What optimizer and learning rate did you use? How many computing resources are required to train Q-DM?
**A.III**: Sorry for the omission. On the CIFAR-10 dataset, we train DDPM and DDIM for 80k steps, which requires 6 GPU days. More details can be found in A4 of Reviewer bfDz. We will add these descriptions and other necessary training settings in the final version.
**Q.IV**: For the experimental results in A3, did you evaluate LDM-4 on LAION-5B? How did you measure FID on LAION-5B? Did you use any pre-trained model?
**A.IV**: Due to time constraints, we randomly select 10k image-text pairs from LAION-5B for training and evaluation. The LDM-4 in A3 is evaluated on the selected data. We use a full-precision LDM-4 trained on the selected data as the pre-trained model. We will conduct experiments on the whole LAION-5B dataset and add the necessary descriptions in the final version.
**Q.V**: In your rebuttal (to me and to other reviewers), a majority of the experimental results lack specific dataset details. For instance, there are several different datasets in LSUN, like bedrooms and churches.
**A.V**: Sorry for the omission. The experiments are conducted on LSUN-Bedrooms with a generation resolution of 256×256. | null | null | null | null | null | null |
SutraNets: Sub-series Autoregressive Networks for Long-Sequence, Probabilistic Forecasting | Accept (poster) | Summary: The paper "SutraNets: Sub-series Autoregressive Networks for Long-Sequence, Probabilistic Forecasting" proposes to model univariate time series with two interleaved networks that model 'fine-grained' and 'coarse-grained' time-steps. This has the advantages of reducing signal paths, reducing inference error accumulation, enabling parallel training, and potentially sharing parameters more efficiently. The paper investigates 5 additional architectures on 6 standard datasets (e.g. electricity, but also MNIST), and demonstrates superior performance over the baselines.
Strengths: + the paper is well and very clearly written (including the literature review, methods, Figures etc.)
+ the core idea of the paper makes intuitive sense, and the paper proposes several good ideas how to extend the standard autoregressive architecture, and investigates their impact on the final performance.
+ the method is shown to improve performance over baselines
Weaknesses: mainly minor things:
- Figure 4: it's a bit unclear what is meant with 'more confident' - I think it is probably meant that it is less blurry? It'd be great to mention this explicitly if this is the case. Also, it'd be great to somehow quantify these findings: what are the likelihoods on the hold-out test data in the different settings? Maybe in this case (MNIST), image-based evaluations (e.g. FID) could also be performed.
- Figure 6: "per 100" in the figure itself. I think it'd be better to re-do the Figure with "inference time per forecast" and then put it there directly - otherwise it lacks a unit.
- There is some prior work on multi-variate forecasting that has some similarity to this paper, but hasn't been mentioned. E.g. [1] (please also check reference & citations of this work), though focussing on the full multi-variate time-series problem, do also use an 'inner' architecture to generate data, and then have an outer autoregressive model. These should be mentioned in the related literature.
[1] Rasul et al 2021. Multivariate Probabilistic Time Series Forecasting via Conditioned Normalizing Flows.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - in Eq. (2) the conditioning is on $x^k_{1:T+N}$, i.e. $y_{\tilde{t}}$ is conditioned on co-variates from the future. I assume this should be $x^k_{1:\tilde{t}}$ instead?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: + The paper has a good limitations section, which covers the limitations. E.g. the method being more complex, and requiring additional tuning for better performance. Also the societal impacts are considered in this section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your helpful and thoughtful review, and your positive comments regarding the paper's core idea, clarity of presentation, and thorough evaluation.
### Regarding concept of "confidence" in Figure 4
> it's a bit unclear what is meant with 'more confident' - I think it is probably meant that it is less blurry? It'd be great to mention this explicitly if this is the case. Also, it'd be great to somehow quantify these findings: what are the likelihoods on the hold-out test data in the different settings? Maybe in this case (MNIST), image-based evaluations (e.g. FID) could also be performed.
Yes, this is a good point. We should have clarified that by "more confident", we meant that at a given percentile of the forecast distribution (e.g., p50% = 50% confidence), the bottom parts of the digit have more non-zero values, implying the networks have more certainty in the digit being generated. While we compare model performance on MNIST using standard forecasting metrics in Table 1, we like the idea of computing likelihood and image-based evaluations for this dataset, and will include these in the final paper.
### Regarding Figure 6
> "per 100" in the figure itself. I think it'd be better to re-do the Figure with "inference time per forecast" and then put it there directly - otherwise it lacks a unit.
Agreed. We will change this.
### Regarding prior probabilistic forecasting models
> There is some prior work on multi-variate forecasting that has some similarity to this paper, but hasn't been mentioned. E.g. [1] (please also check reference & citations of this work), though focussing on the full multi-variate time-series problem, do also use an 'inner' architecture to generate data, and then have an outer autoregressive model. These should be mentioned in the related literature.
Also a good point. In response to your comment and also suggestions from Reviewer 6VCF, we will revise the related work to add an initial subsection on "Probabilistic autoregressive forecasting", and we will note that SutraNets are fully *compatible* with prior techniques to estimate/sample an output probability distribution at each step. For example, SutraNets can serve as the RNN model both with the AR denoising diffusion model - as box RNN in Figure 1 in [Rasul et al., 2021a](https://arxiv.org/abs/2101.12072), or with the AR conditioned normalizing flows - as box RNN in Figure 1 in [Rasul et al., 2021b](https://arxiv.org/abs/2002.06103). While these papers focus on short-term multivariate forecasting, there is no reason these estimators could not also be applied to long-term forecasting, where SutraNets would likely prove very helpful as the sequential backbone.
Moreover, while our paper shows that SutraNets improve the accuracy of C2FAR ([Bergsma et al., 2022](https://openreview.net/forum?id=lHuPdoHBxbg)), we should definitely have clarified that C2FAR is a state-of-the-art distribution estimator, and was previously shown to itself improve over DeepAR ([Salinas et al., 2020](https://www.sciencedirect.com/science/article/pii/S0169207019301888)), DeepAR-binned ([Rabanser et al., 2020](https://arxiv.org/abs/2005.10111)), SQF-RNN ([Gasthaus et al., 2019](https://proceedings.mlr.press/v89/gasthaus19a.html)), and IQN-RNN ([Gouttes et al., 2021](https://arxiv.org/abs/2107.03743)) - all essentially different methods for distribution estimation that operate on top of a standard sequence model. In our paper's evaluation, we went deep in testing C2FAR alongside various enhancements to the core sequence model, such as adding lags, dropout, and frequency hierarchies. However, it would certainly help our case to go broader and show that SutraNets also enhance other methods for distribution estimation, and we will pursue such experiments in advance of the camera-ready deadline.
### Regarding conditioning on covariates
> in eq.2 the conditioning is on $x^k_{1:T+N}$, i.e. $y_{\\tilde{t}}$ is conditioned on co-variates from the future. I assume this should be $x^k_{1:\\tilde{t}}$ instead?
Yes, we should clarify this. Since covariates are known *a priori*, mathematically there is no reason not to condition on covariates over all timesteps. However, in prior forecasting models such as DeepAR ([Salinas et al., 2020](https://www.sciencedirect.com/science/article/pii/S0169207019301888), see Eq. (2)), it is common to define the RNN as consuming the current covariates alongside the prior observation at each step, so the output is implicitly only conditioned on covariates at $\\le t$, which, as you have noted, is also what we do (see our Eq. (3) and definition of RNN in line 131).
FYI, in practice, ignoring future covariates is not really a limitation, as we can always introduce new covariates at time $t$ that identify future covariates, e.g., they may say something like, "there is a sale on this product beginning at time $t+1$". In prior work, most covariates are either static across the whole prediction range (e.g., a product ID), or provide timestep-specific information (e.g., the hour-of-the-day or day-of-the-week of that particular timestep).
We will revise the paper to make it clear when we are dropping conditioning on all covariates, to condition on only those up to time $\\tilde{t}$, and explain the rationale for this as noted above. See also our similar response to Reviewer 7ywN on this topic.
---
Rebuttal Comment 1.1:
Title: Reply
Comment: I thank the authors for their diligent efforts in addressing the concerns raised in my initial review.
The minor concerns I had initially identified have been adequately discussed in the authors' rebuttal. At this point, I do not feel to have further points to discuss with the authors. | Summary: The authors propose new ways execute the recurrence in auto-regressive forecasting models. The approach is practical and shouldn't be too complicated to implement for a forecasting practitioner. Experiments show performance uplifts on common real and toy datasets.
Strengths: The approach is interesting and practical, although the motivation appear a bit heuristically. One important advantage that is mentioned but not emphasized much, is that due to the lagged recurrence, roughly $K$-fold parallel processing can be achieved during training, which can be rather significant for RNN models on long sequences.
The paper is well organized, clearly written and easy to follow. The background section is nice and informative and provides a good overview over related work.
Weaknesses: The authors explore different interleaved recurrence orders, but there is more to the topic that could strengthen the paper. For example, how can the performance be attributed to different recurrence order on the one hand, and different hidden state dynamics on the other hand? As an example Fig 2(a) and (c) have the same recurrence order, but different hidden state dynamics. Accordingly Backfill-alt Fig 2(f) could perhaps perform even better with the straight hidden state dynamics of a standard RNN.
Another analysis I'd love to see is how the conditional distributions $p(y_t| ...)$ look like for the different methods. Is there a recurrence order where these are mostly Gaussian or have another preferable simple shape? Take for example $y_1\sim\mathcal N(0,\sigma_1)$ and $y_2\sim\mathcal N(y_1,\sigma_2)$, then both $p(y_1)$ and $p(y_2|y_1)$ are Gaussian, but for $p(y_2)$ and $p(y_1|y_2)$ the latter has a complicated bimodal form. Similar effects may impact the performance of autoregressive models in general, and it would be interesting to see that analyzed for the proposed models.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * The equation (2) is incorrect: the right hand side terms must condition on $\mathbf x_{1:T+N}$ not $\mathbf x^k_{1:\tilde T+\tilde N}$ in order for both sides to be equal. That has implications for the RNN formulation in l131 where in the current form the RNN state $h^k_{\tilde t}$ effectively does not depend on any $\mathbf x_t$ with $t>K\tilde t$, while in equation (1) it does.
* Equation (2) and the inline formula in l131 describe the Regular-alt case. It would be helpful to have these explicitly formulated also for the other 3 cases, so please add them. That could save space between l135 and l146
* l143-145: *"When not conditioning on > k values, SutraNets can instead generate the complete prediction range of the kth sub-series before generating any predictions for > k ones"*. I think that's not correct. When sampling according to the xxx-non schema, the RNN has to be conditioned on $h_{\tilde T+\tilde N}^{<k}$ in order to get (wrapped up) information about the whole past sub-series $y_{\tilde T:\tilde T+\tilde N}^{<k}$ and not just $y_{\tilde t}^{<k}$. This may have implications for the experiments and currently imposes an unfair disadvantage on the xxx-non methods.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: limitations are properly addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for your detailed and constructive review, and your kind words regarding the advantages of the approach and the paper's organization, clarity, and overview of related work.
### Regarding parallelization of training
> One important advantage that is mentioned but not emphasized much, is ... roughly $K$-fold parallel processing can be achieved during training, which can be rather significant for RNN models on long sequences
This is a great point: we will definitely emphasize this advantage in the intro, and contrast SutraNets to prior work along these lines. For example, see the proposed new summary table (Table 0) in our response to Reviewer 6VCF.
### Regarding disentangling the effects of generative order vs. hidden state dynamics
> Backfill-alt Fig 2(f) could perhaps perform even better with the straight hidden state dynamics of a standard RNN
So basically have a single evolving RNN hidden state, but step through the time series in backfill order in sub-sequences of $K$ consecutive values (akin to reading a document downwards but from right-to-left on each line). Essentially, a version of 2(f) where the state arrows always connect the output in row $i$ to the output in row $i+1$, rather than connecting states within sub-series. Since signal path would be high, and error accumulation lower, we agree this would help disentangle the effects of generative order vs. hidden state dynamics. Also, what is great is that this simply requires a kind of re-shuffling of the values in the conditioning and generation windows (i.e., a simple pre-processing step) before applying any standard sequence model. We will definitely pursue this further - great idea!
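For concreteness, that re-shuffling pre-processing step could look like the following toy numpy sketch (our reading of the "right-to-left on each line" metaphor above; this is not code from the paper):

```python
import numpy as np

y = np.arange(12)                  # toy series, with K = 3
K = 3
rows = y.reshape(-1, K)            # rows of K consecutive values ("lines of a document")
reordered = rows[:, ::-1].ravel()  # read downwards, right-to-left on each line
print(reordered)                   # [ 2  1  0  5  4  3  8  7  6 11 10  9]
```

A standard sequence model would then be applied to `reordered` unchanged, isolating the effect of generative order from that of the interleaved hidden-state dynamics.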
### Regarding visualizing the conditional distributions for the different methods
> Another analysis I'd love to see is how the conditional distributions look like for the different methods
This is also a good suggestion, as it may help shed light on *why* certain orderings work better. We have the machinery to plot the distributions, we just need to think further about how to systematically and objectively analyze these plots.
### Regarding conditioning on covariates
> The equation (2) is incorrect: the right hand side terms must condition on $x_{1:T+N}$ not $x^k_{1:\\tilde{T}+\\tilde{N}}$ in order for both sides to be equal. That has implications for the RNN formulation in l131
Yes, good catch: here we break strict equality. While mathematically there is no reason not to condition on *all* the covariates, in prior forecasting models such as DeepAR ([Salinas et al., 2020](https://www.sciencedirect.com/science/article/pii/S0169207019301888), see Eq. (2)), it is common for the RNN to consume the current covariates alongside the prior observation at each step, so the output is implicitly only conditioned on covariates at $\\le t$. We will fix this by conditioning on all the covariates in Eq. (2) (with strict equality), and then using approximate equality in Eq. (3) when we introduce the RNN, which consumes sub-series covariates. We will also provide more explanation.
Regarding the implications for the covariates input to each sub-series RNN: note that in practice, ignoring future covariates is not really a limitation, as we can always introduce new covariates at time $t$ that identify future covariates, e.g., they may say something like, "there is a sale on this product beginning at time $t+1$". Moreover, in prior work, most covariates are either static across the whole prediction range (e.g., a product ID), or provide timestep-specific information (e.g., the hour-of-the-day or day-of-the-week of that particular timestep).
### Regarding full equations for all generative orderings
> Equation (2) and the inline formula in l131 describe the Regular-alt case. It would be helpful to have these explicitly formulated also for the other 3 cases
Yes, we can do this. For the benefit of other reviewers, we note the non-alternating formulas arise when removing the purple $y^{>k}_{1:\\tilde{t}-1}$ as a conditioning value.
### Regarding non-alternating models
> "When not conditioning on $>k$ values, SutraNets can instead generate the complete prediction range of the $k$th sub-series before generating any predictions for $>k$ ones". I think that's not correct. When sampling according to xxx-non schema, the RNN has to be conditioned on $h_{\\tilde{T}+\\tilde{N}}^{<k}$ in order to get (wrapped up) information about the whole past sub-series ... This may have implications to the experiments and currently impose an unfair disadvantage to the xxx-non methods.
We should clarify that the $k$th sub-series RNN does not condition on *hidden states* from other sub-series RNNs, rather only on *values* from other sub-series, which the $k$th sub-series RNN consumes at each timestep along with its own prior values, as illustrated in Figure 2. Each sub-series RNN thus gets "wrapped up" information about the other sub-series in its own hidden state, but only *up to the current timestep* - and, for non-alternating models, only for sub-series with index $<k$. Furthermore, in Figure 2(d), note that when generating y5 in row 7, we *could* theoretically condition on y6 and y7 at this point, but we do not. In practice, this seems to impair the performance of Regular-non (as noted in Footnote 3 on page 7), but not Backfill-non, where all the proximal values (before and after the value being generated) *are* conditioned on (input either at the current or at the previous timestep) - we will expand Footnote 3 to explicitly contrast the regular-non and backfill-non situations.
Now, it is not clear that ignoring distant future values is a particular disadvantage to the non-alternating models, since the alternating models (and standard RNNs) also only condition on values generated up to and including the current timestep, however it would be informative to us if you see it differently.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their rebuttal. I think my concerns were addressed and suggestions followed, but no major step changes were made to the paper. My initial rating seems to resonate with the other reviewers, so I'm gonna stick with it. | Summary: The author proposed SutraNets, a novel method for neural probabilistic forecasting of long-sequence time series. It addresses challenges faced by previous autoregressive approaches, such as error accumulation and modeling long-distance dependencies. SutraNets treat long predictions as multivariate predictions over lower-frequency sub-series, reducing errors and signal path distances. The authors conduct extensive experimental results on real-world datasets, and they show significant improvements.
Strengths: 1. The sub-series in long-sequence analysis have seldom been investigated, especially for RNN-type networks.
2. The paper is well-organized, and the discussion is very clear.
Weaknesses: 1. What is the deviation point of high and low frequency? What is the connection to traditional signal analysis?
2. The performance improvement seems moderate. Could the authors include std values for completeness?
3. Finding 4 could include more recent SOTA methods.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Could the author discuss more about this work compared with PatchTST (2023)?
2. See above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your very useful comments, and for your kind words regarding the paper's key idea, organization, and clarity.
### Regarding selection of high and low frequencies
> What is the deviation point of high and low frequency? What is the connection between traditional signal analysis?
This is a good point: we did not describe how we chose the frequencies. While the high frequency is implicitly the sampling frequency of the time-series, the low frequency (both for SutraNets and for the Low2HighFreq baseline) follows from the setting of $K$, which we basically viewed as a kind of hyperparameter to be set "heuristically" (line 201). We will clarify in the methods section that $K$ is typically chosen such that it divides evenly into the primary seasonal period (i.e., the low frequency is twice or four times per day for series with primarily daily seasonality). We will also gratefully use your idea and note in the paper that techniques such as autocorrelation and Fourier analysis can reveal the primary seasonality, and hence inform the choice of $K$, in cases where such information is not known *a priori*.
Note also we still found a strong benefit of SutraNets over baselines even when $K$ does not divide evenly into the seasonal period, particularly for the alternating generative order. See Finding 5 (line 283) and Table 3 for more details.
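As an illustration of the Fourier-based approach to revealing the primary seasonality (and hence informing the choice of $K$), here is a toy sketch; the helper `primary_period` is our own name, not code from the paper:

```python
import numpy as np

def primary_period(y):
    """Estimate the dominant seasonal period from the FFT magnitude peak."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    spec = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y))
    k = np.argmax(spec[1:]) + 1  # skip the DC bin
    return round(1.0 / freqs[k])

# Hourly toy series with daily (period-24) seasonality:
t = np.arange(24 * 14)
y = np.sin(2 * np.pi * t / 24) + 0.1 * np.random.default_rng(0).normal(size=t.size)
period = primary_period(y)
print(period)  # 24, so K might be chosen to divide 24 evenly (e.g., K = 6 or 12)
```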
### Regarding error bars for experimental results
> The performance improvement seems moderate. Could the author include std for complete?
Yes, note we do study the (very consistent) stability of improvement in supplemental section B.2, in particular supplemental Figures 3 and 4, and we noted the overall finding in easy-to-miss Footnote 2 of the main paper. We should promote Footnote 2 to be part of the main paper discussion and expand on our findings there.
### Regarding recent "deeper" SOTA methods
> Finding 4 [Deeper models enable improved long-sequence forecasting] could include more recent SOTA methods.
This is a very good point. Beyond strengthening our finding, it would be of significant practical and theoretical importance to know if handling of long-term information can be improved in recent SOTA methods by simply increasing the depth of the underlying neural network. We will definitely pursue this direction, thanks!
### Regarding relationship to PatchTST
> Could the author discuss more about this work compared with pathTST (2023)?
Yes, this paper is definitely worth adding to our sub-section "Modeling long-term dependencies" (and to a proposed new summary table, see Table 0 in our response to Reviewer 6VCF). We will discuss PatchTST as follows:
> In contrast to SutraNets, which operate over sub-series of the original input (i.e., points spaced $K$ apart), PatchTST groups $K$ *consecutive* values into input *patches*. While patching was proposed as a method to reduce the attentional complexity of *Transformers*, patching could also be used with RNNs, where it would effectively reduce the signal path by a factor of $K$ (taking in $K$ inputs each step, with $K$ times fewer inputs overall). However, PatchTST does not provide a mechanism to probabilistically *generate* patches, rather it directly outputs non-patched point predictions at all timesteps in one shot. | Summary: This manuscript proposes SutraNets for long-range probabilistic forecasting on time series data and pixel sequences.
SutraNets is a type of recurrent neural network that transforms a long series into a collection of shorter sub-series. Sub-series forecasts are generated autoregressively, sequentially conditional on each other, enabling coherent outputs.
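The sub-series transformation described in this summary can be sketched minimally in Python (an illustrative toy, not the authors' implementation; the function names are our own):

```python
# Illustrative toy (not the authors' code): convert a univariate series into
# K lower-frequency sub-series, and interleave them back together.
def to_sub_series(y, K):
    """Sub-series i holds y[i], y[i+K], y[i+2K], ..."""
    return [y[i::K] for i in range(K)]

def from_sub_series(subs):
    """Interleave K sub-series back into the original ordering."""
    K = len(subs)
    y = [None] * sum(len(s) for s in subs)
    for i, s in enumerate(subs):
        for j, v in enumerate(s):
            y[i + j * K] = v
    return y

y = list(range(12))            # y0 ... y11
subs = to_sub_series(y, K=3)   # [[0, 3, 6, 9], [1, 4, 7, 10], [2, 5, 8, 11]]
assert from_sub_series(subs) == y
```

With K = 3, this matches the y0, y3, y6, ... indexing used in the rebuttal discussion.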
Strengths: Figure 2 presents the generation ordering in SutraNets well, making it easy to follow.
Weaknesses: Since this work focuses on probabilistic forecasting, it should include related work and baselines on time series probabilistic forecasting models.
However, most related baselines are missing. For example:
- NeurIPS’21 Probabilistic Transformer for Time Series Analysis
- ICLR’21 Multivariate Probabilistic Time Series Forecasting via Conditioned Normalizing Flows
- ICML’21 Autoregressive Denoising Diffusion Models for Multivariate Probabilistic Time Series Forecasting
Since “long-sequence” in the title means predicting a long sequence, it would be better to compare against recent long-range time series prediction models. This is because the main work in this paper is the designed recurrent structure, rather than a new way to estimate uncertainty. In other words, the connection between the proposed model and probabilistic modeling is weak.
Moreover, I believe that the structure of SutraNets is quite similar to recurrent neural networks with skip or residual connections in the temporal direction, and this similarity does not depend on $K$. It is unclear what advantages SutraNets have over RNNs with skip connections. The main differences between SutraNets and the following works should be discussed.
- NIPS’17 Dilated Recurrent Neural Networks
- NIPS’16 Architectural Complexity Measures of Recurrent Neural Networks
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: see weaknesses
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations and broader impacts have been discussed in Section 3.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your helpful and insightful review. Your pointers to related work will enable us to substantially improve the paper. It is also gratifying to know that Figure 2 was valuable and contributed to the overall good presentation.
### Regarding prior probabilistic forecasting models
> Since this work focuses on probabilistic forecasting, it should include related work and baselines on time series probabilistic forecasting
Please refer to the section "Regarding prior probabilistic forecasting models" in our rebuttal to Reviewer BQ6D for full details on how we will use your suggestions to address this.
### Regarding [Tang & Matteson, 2021](https://proceedings.neurips.cc/paper/2021/hash/c68bd9055776bf38d8fc43c0ed283678-Abstract.html): Probabilistic Transformer for Time Series Analysis
This approach is worth mentioning in our Section 2 sub-section "Reducing training/inference discrepancy", as it is non-autoregressive, but still probabilistic. However, since the predicted outputs do not condition on each other (see their Eq. 2), the output values would not be generated coherently. Without modeling local dependencies among outputs, the model can only provide a very weak assessment of whether a given sequence is likely, e.g., for anomaly detection, interpolating missing values, etc. As mentioned, our goals are more aligned with flexible autoregressive models like GPT3, where not modeling local dependencies would result in, e.g., ChatGPT generating incoherent responses.
### Regarding the connection to probabilistic modeling
> the main work in this paper is the designed recurrent structure, rather than a new way to estimate the uncertainty... the connection between the proposed model and probabilistic modeling is weak
We will clarify in our intro that, in our use case, quantifying the variance in a predicted output is not a "nice-to-have", but essential for downstream decision makers that use our predictions to make risk-aware resource allocation and capacity planning decisions. Computing a full likelihood of a long time series also facilitates other important use cases such as anomaly detection, missing value interpolation, etc. We thus view SutraNets as fundamentally a novel way to compute a joint probability of very long time series forecasts, as opposed to an RNN-specific enhancement. Indeed, you can apply the SutraNet factorization to both LSTMs and Transformers (see our response to reviewer iCD8). While better capturing long-term dependencies *is* one of the advantages of SutraNets in RNNs (and could be applied more broadly), it goes along with reducing error accumulation in autoregression, enabling a $K$-fold improvement in training parallelism, and enabling coherent outputs - advantages that are simultaneously lacking in other proposed probabilistic models, as we shall discuss next.
### Regarding advantages of SutraNets over other RNNs
> It is unclear what advantages SutraNets have over RNNs with skip connections
This is valuable feedback. We will unify the discussion of lagged/residual inputs (lines 84-85), and skip connections between LSTM states (categorized as multi-rate recurrence layers in lines 64-65). We will also add a summary section to the related work, which will include the following new table:
| RNN method | Meaning of K | RNN signal path | Max generative stride without feedback | RNN training parallelism | Coherent outputs |
|-|-|-|-|-|-|
| Standard RNNs, e.g. LSTMs ([Hochreiter & Schmidhuber, 1997](https://dl.acm.org/doi/10.1162/neco.1997.9.8.1735)) | N/A | $\\mathcal{O}(N)$ | $\\mathcal{O}(1)$ | $\\mathcal{O}(1)$ | Yes |
| Skip connections ([Zhang et al., 2016](https://proceedings.neurips.cc/paper/2016/hash/860320be12a1c050cd7731794e231bd3-Abstract.html)), lagged/residual inputs ([He et al., 2016](https://arxiv.org/abs/1512.03385)) | Skip/lag amount | $\\mathcal{O}(N/K)$ | $\\mathcal{O}(1)$ | $\\mathcal{O}(1)$ | Yes |
| PatchTST (as RNN) ([Nie et al., 2023](https://arxiv.org/abs/2211.14730)) | Size/stride of patches | $\\mathcal{O}(N/K)$ | N/A | $\\mathcal{O}(1)$ | No |
| DilatedRNN ([Chang et al., 2017](https://arxiv.org/abs/1710.02224)) (dilations from $C \\ldots K$) | Highest dilation amount | $\\mathcal{O}(N/K)$ | $\\mathcal{O}(C)$ | $\\mathcal{O}(C)$ | Only if $C=1$ |
| SutraNets | Number of sub-series | $\\mathcal{O}(N/K)$ | $\\mathcal{O}(K)$ | $\\mathcal{O}(K)$ | Yes |
> Table 0: Advantages of SutraNets over prior RNNs: prior methods can reduce signal path, but not all methods improve error feedback or facilitate training parallelism. Patching, as used in PatchTST, can reduce signal path by grouping consecutive elements into "patches", but it does not provide probabilistic outputs. Dilated RNNs with minimum dilation amounts $>1$ do facilitate error reduction and parallel training, but at the cost of sacrificing coherency, as the model is then "equivalent to multiple shared-weight networks, each working on partial inputs" ([Chang et al., 2017, Section 4.4](https://arxiv.org/abs/1710.02224)). SutraNets reduce signal path distances while simultaneously reducing error accumulation, enabling training parallelism, and generating coherent outputs. Experimentally, SutraNets perform better than standard LSTMs and LSTMs with lagged/residual inputs.
We will also add a discussion of Dilated RNN to sub-section "Modeling long-term dependencies", where we will provide the example of a time series that spikes to a high value at the very final historical (conditioning) input. In a Dilated RNN (with $C>1$), some of the outputs will be generated conditional on this spike, while others will be generated completely independently of the spike, and independent of other outputs that were aware of the spike, resulting in highly-incoherent output. The Dilated RNN paper mitigates this for point predictions by adding a final "fusion layer", but such an approach is not compatible with probabilistic forecasts.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
After carefully reading your response and re-evaluating the manuscript, I still have the following questions:
I believe the main contribution of this work lies in the long-sequence decoding strategy (Backfill-alt is suggested) for autoregressive models, which aims to reduce error accumulation (mentioned in the abstract).
However, it is important to note that this strategy is still limited.
For instance, in Figure 1, the approach uses (y0, y3, y6, y9) to generate (y12, y15, y18) initially, and then employs (y12, y15, y18) along with the history (y1, y4, y7, y10) to generate (y13, y16, y19).
This raises some questions:
- I noticed that y12, y15, and y18 are directly included in the final output, even though they were generated using only (y0, y3, y6, and y9) without considering the recent values y10 and y11. **If (y12, y15, y18) turn out to be incorrect predictions, would these errors accumulate and affect the subsequent predictions of (y13, y16, y19)?**
- Why wasn't (y10, y11) used to generate y12? It seems that predicting y12 using (y10, y11) would be much easier (short-term prediction).
- Moreover, since y12 is decoded using y0, y3, y6, and y9, **does it imply that the model assumes the time series to be periodic**? (The time series depicted in Figure 3 appears to be relatively straightforward to predict using existing long-term prediction methods.)
There is a related work TimesNet [1], which shares a similar idea to SutraNets. Both approaches transform long time series into multiple subseries. Considering this, I have another concern regarding **why the authors did not compare their proposed model with recent long-time series prediction methods such as [1]**. This question was not addressed in their response.
[1] [Timesnet: Temporal 2d-variation modeling for general time series analysis](https://arxiv.org/abs/2210.02186)
I believe it would be relatively straightforward to adapt long-time prediction methods for probabilistic forecasting or to evaluate SutraNets in the context of non-probabilistic forecasting. This would be valuable in illustrating the prowess of SutraNet in long-sequence prediction.
Given the limited evaluation presented and the absence of baselines for long-time series prediction methods, I remain uncertain about the true effectiveness of the proposed model in **long sequence** prediction.
---
Reply to Comment 1.1.1:
Comment: We really appreciate that you have both helped improve the paper, and are now re-evaluating the manuscript in light of our work together.
> the main contribution of this work lies in the long-sequence decoding strategy ... which aims to reduce error accumulation
Based on reviewer feedback, we will definitely revise the abstract to emphasize the novel benefit of SutraNets is the ability to simultaneously: (1) capture long-term dependencies by reducing signal path, (2) reduce error accumulation in autoregression, (3) enable a K-fold improvement in training parallelism for RNNs, and (4) generate coherent outputs.
> in Figure 1, the approach uses (y0, y3, y6, y9) to generate (y12, y15, y18) initially, and then employs (y12, y15, y18) along with the history (y1, y4, y7, y10) to generate (y13, y16, y19)
We understand you to be extrapolating the process in Fig. 1(d), for **Regular-non**. Indeed, it is a *non*-alternating model: the first sub-series prediction is generated end-to-end while only using historical values for that subseries.
> If (y12, y15, y18) turn out to be incorrect predictions, would these errors accumulate and affect the subsequent predictions of (y13, y16, y19)?
Errors do still accumulate, but to a lesser extent. Consider generating y19. In a standard RNN (Fig. 1a), we must generate y11, y12, y13, y14, y15, y16, y17, y18 - 8 steps - in order to generate y18, i.e., the value that precedes y19. However, in Regular-non, we can generate y12, y15, y18 - 3 steps - in order to generate y18. So the prediction of y19 can be based on proximal values (y18) themselves generated with $K$ times less error accumulation.
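The step counts in this argument can be checked with a tiny helper (hypothetical, for illustration only): the number of autoregressive steps needed to produce y_t when history ends at y_h.

```python
# Illustration of the argument above (not the paper's code): count the
# autoregressive steps needed to produce y_t when history ends at y_h.
def steps_standard(h, t):
    # standard RNN: generate y_{h+1}, y_{h+2}, ..., y_t
    return t - h

def steps_sub_series(h, t, K):
    # non-alternating sub-series model: only y_t's own sub-series values
    # after the history (..., y_{t-2K}, y_{t-K}, y_t) must be generated
    return sum(1 for _ in range(t, h, -K))

assert steps_standard(10, 18) == 8         # y11 ... y18
assert steps_sub_series(10, 18, K=3) == 3  # y12, y15, y18
```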
> Why wasn't (y10, y11) used to generate y12? It seems predicting y12 using (y10, y11) would be much easier
Regular-non by definition generates y12 using y9 and earlier, while standard RNNs (Fig. 1a) and Regular-alt (Fig. 1c) generate sequentially as you are suggesting. Yet interestingly, Regular-non sometimes works better than Regular-alt in experiments. Why? Well, during *training*, sequential RNNs sometimes find it effective to just repeat the previous gold/true value, i.e., to predict y12 based on gold y11. But in *inference*, if y11 is generated after a long chain of error accumulation, repeating y11 to predict y12 is problematic. By forcing the network to attend to values $K$ steps in the past, we reduce train/test "discrepancy" and improve forecasting accuracy.
> since y12 is decoded using y0, y3, y6, and y9, does it imply that the model assumes the time series to be periodic?
SutraNets do not require data to be periodic to be effective, e.g., dataset `mnist`$^{\pi}$ comprises images after a fixed random permutation, and `azure` and `wiki` comprise many trending but non-seasonal series. However, for time series that *are* periodic, we asked a similar question to yours: does it matter whether y0, y3, y6, etc. captures seasonality, i.e., does it matter whether $K$ divides into the seasonal period? The results are in Table 3 and Finding 5: non-alternating models are comparatively weaker than alternating versions.
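A toy check of this point (our own example, not from the paper): when $K$ divides the seasonal period $P$, each strided sub-series is itself seasonal with period $P/K$; when it does not, the short period is destroyed.

```python
# Toy example (ours, not the paper's): daily seasonality at an hourly rate.
P = 24
y = [t % P for t in range(10 * P)]       # perfectly periodic series

def is_periodic(seq, p):
    return all(seq[j] == seq[j + p] for j in range(len(seq) - p))

assert is_periodic(y[0::4], 6)           # K = 4 divides 24: period becomes 6
assert not is_periodic(y[0::5], 5)       # K = 5 does not divide 24
```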
> it is important to note that this strategy is still limited
While *non-alternating* RNNs generate future values without "seeing" all the history, keep in mind that **all the alternating RNNs do see all the history in their generative process**. That is, the orange RNN in Fig. 1(f) will have seen y0, y1, y2, ..., y7, y8 when it goes to generate y11. True, it does not see y9 or y10, but that's the point: all values from y9 onward must be generated, and Backfill-alt generates y11, then y10, then y9. Since this seems to work quite well, we would like to share these findings with the community.
> There is a related work TimesNet [1], which shares a similar idea to SutraNets. Both approaches transform long time series into multiple subseries
We can certainly include discussion of TimesNet. It is interesting that TimesNet transforms univariate time series into 2D tensors of *multiple* periods, which are then processed via Inception-style 2D kernels (i.e., now using a 2D computer vision backbone, and outputting point predictions). SutraNets, on the other hand, transform time series into $K$ sub-series of the *same* period, and propose an autoregressive factorization (with different generative orderings) for generating these sub-series using their own and previous values (with a 1D sequential generative model that produces probabilistic outputs).
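The contrast between the two transformations can be made concrete with a toy example (ours, not from either paper's code): TimesNet-style folding reshapes the series into consecutive rows of one period, whereas the sub-series factorization takes strided slices; when the fold width equals $K$, the sub-series are exactly the columns of the fold.

```python
# Toy contrast (ours, not from either paper's code).
y = list(range(12))
P = 4   # assumed period for TimesNet-style 2D folding
K = 4   # number of SutraNet-style sub-series

fold_2d = [y[r * P:(r + 1) * P] for r in range(len(y) // P)]
# rows are consecutive periods: [[0,1,2,3], [4,5,6,7], [8,9,10,11]]

subs = [y[i::K] for i in range(K)]
# strided sub-series: [[0,4,8], [1,5,9], [2,6,10], [3,7,11]]

# when P == K, the sub-series are the transpose (columns) of the 2D fold
assert [list(c) for c in zip(*fold_2d)] == subs
```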
Title: Reply to Official Comment by Reviewer 6VCF (1/2) | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a novel method for probabilistic forecasting of long-sequence time series. It uses an autoregressive generative model to factorize the likelihood of long sequences into products of conditional probabilities. The proposed model, SutraNets, treats long, univariate prediction as multivariate prediction over lower-frequency sub-series to effectively reduce error accumulation. Experimental results show improved forecasting accuracy on a variety of datasets.
Strengths: 1. It's novel to convert a univariate series into a multivariate series, each dimension comprising sub-series of the original sequence.
2. It's interesting that SutraNet model generates each sub-series conditional on both its own prior values and on other sub-series.
Weaknesses: The proposed model is only applied to an LSTM-based model, while many state-of-the-art time series forecasting models are Transformer-based. It is not clear whether it can be integrated into Transformer-based models.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Could this approach be applied to Transformer based models?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The limitations are mentioned in Section 3. However, it does not discuss the limitations of extending the approach to Transformer-based models, which have become very popular in time-series forecasting recently.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful feedback, and your support for the paper's core idea.
You raise an important point regarding Transformers: we did not spend enough time describing how SutraNets could be applied to Transformers. While we did mention in line 45 that SutraNets could be applied to Transformers, in the methods section we only briefly noted (lines 162-163) how attending to SutraNet sub-series could lower Transformer complexity.
We will add a new sub-section to Section 3:
> **Application to Transformers**
> SutraNets can also be applied with autoregressive Transformer models ([Li et al., 2019](https://arxiv.org/abs/1907.00235)). Rather than using $K$ separate RNNs to parameterize the conditional probabilities in Eq. (3), we can use $K$ separate Transformers. Compared to a single standard Transformer, using Backfill and non-alternating SutraNet Transformers would have the advantage of reducing *error accumulation* (by forcing the network to predict without immediately-preceding values), but not of *signal path*, which is already $\\mathcal{O}(1)$ for standard Transformers ([Vaswani et al., 2017](https://arxiv.org/abs/1706.03762)). Moreover, at each timestep, a SutraNet Transformer can attend over all prior values, limited only by the generative order. In a naive implementation, alternating SutraNet models could therefore attend to essentially all prior historical values, resulting in an asymptotic complexity of $\\mathcal{O}(N^2)$ - the same as standard Transformers. However, we can achieve $\\mathcal{O}(N^2/K)$ attentional complexity by restricting each sub-series Transformer to only attend to values from its own sub-series, plus a small number of proximal values from other sub-series (similar to strided or banded self-attention ([Child et al., 2019](https://arxiv.org/abs/1904.10509), [Brown et al., 2020](https://arxiv.org/abs/2005.14165))). Although asymptotically larger than the $\\mathcal{O}(N^2/K^2)$ complexity of patching (Nie et al., 2023), or the $\\mathcal{O}(N (\\log N)^2)$ of LogSparse attention ([Li et al., 2019](https://arxiv.org/abs/1907.00235)), it merits further investigation to determine which approach results in the most favorable accuracy vs. complexity trade-off in practice.
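The restricted attention pattern proposed in this paragraph can be sketched as a boolean mask (a hypothetical illustration with our own parameter names, not an actual implementation):

```python
# Hypothetical sketch: position t may attend to (i) earlier positions in its
# own sub-series (same index mod K) and (ii) the w immediately preceding
# positions from other sub-series. Allowed keys per row: O(t/K + w),
# giving O(N^2 / K) total attention cost.
def restricted_mask(N, K, w):
    mask = [[False] * N for _ in range(N)]
    for t in range(N):
        for s in range(t + 1):                 # causal: only s <= t
            same_sub = (s % K) == (t % K)
            proximal = (t - s) <= w
            mask[t][s] = same_sub or proximal
    return mask

m = restricted_mask(N=12, K=3, w=2)
assert m[9][0] and m[9][3] and m[9][6]         # own sub-series
assert m[9][7] and m[9][8] and not m[9][5]     # proximal window of w = 2
assert not m[5][7]                             # never attends to the future
```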
In the limitations section, we will also mention that SutraNets will not improve (nor harm) the parallelization of training in Transformers, since Transformers are already fully parallelizable across the sequence.
Finally, we will do a better job of motivating why we evaluated SutraNets in the context of RNN-based forecasting in the first place. Notably, our production forecasting system, like many others in industrial settings and cloud services, (e.g., via Amazon Forecast), is based on RNNs. As our paper shows, SutraNets offer major benefits to such RNN-based systems, as SutraNets are the first approach to advance all three key dimensions of signal path, error accumulation, and parallelization of RNN training.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot to the authors for addressing my initial concerns. However, my main concern is that in the last 2-3 years, at the latest top-tier conferences, other backbone models have shown far superior performance to RNN-based models. Even for probabilistic forecasting, it is very easy to adapt current Transformer-based models in a probabilistic fashion. Please refer to [1], Section 4.4 (appendix Table 9), for the results with various Transformer-based backbone models. Could the results in this paper beat the results shown in appendix Table 9?
[1] Shabani, Amin, et al. "Scaleformer: Iterative Multi-scale Refining Transformers for Time Series Forecasting." ICLR 2023.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for engaging in this discussion and for suggesting the insightful comparison with Scaleformer. We absolutely do understand your concern. While our new paragraph above clarifies that SutraNets can be applied to Transformers, it is definitely not certain that the improvements that we demonstrated will generalize to Transformers. That being said, I am sure you would agree that NeurIPS may still include papers demonstrating advances in RNNs. As you know, RNNs are more biologically plausible than Transformers (maintaining and updating a memory-like state), and, unlike Transformers, which must use a fixed-size context window, RNNs can theoretically condition on context of any length.
This latter point is especially relevant to comparisons with Scaleformer ([Shabani et al., 2023](https://arxiv.org/abs/2206.04038)). In Table 9 of Shabani et al. (and in all their results), note the underlying Transformer is restricted to 96 historical values, i.e., only **4 days** of hourly data (Section 4.1: "the look-back window size is fixed to 96, and the horizon is varied from 96 to 720"). Yet also note the `traffic` dataset (and to a lesser extent, `electricity`) has extremely strong **weekly** seasonality - i.e., a future value is accurately predicted by the value 168 steps (hours) in the past. Since the implemented Scaleformer cannot "see" these values from one week ago, it makes sense that it performs much worse than both SutraNets *and* the seasonal-naive-1week baseline (see below). (One may also qualitatively compare the results in their Figure 9 to our Figure 3, to see how Scaleformer may miss weekly "spikes" that SutraNets can predict.) Furthermore, note that probabilistic Scaleformer uses the non-SOTA Gaussian output distribution of DeepAR (Salinas et al., 2020), while SutraNets are evaluated with the SOTA approach of C2FAR (Bergsma et al., 2023).
Although Scaleformer with Transformers may be limited to short conditioning windows, the core *idea* of Scaleformer can be applied with RNNs. Indeed, we implemented and evaluated a similar method to Scaleformer, but for RNNs (and using C2FAR as the output distribution), which we called Low2HighFreq. We noted the connection to Scaleformer in our related work.
The full comparison (repeating results from our Table 1) is summarized in the following table:
| System | Length of conditioning | Length of prediction | CRPS/wQL on `electricity` | CRPS/wQL on `traffic` |
|--------------------------|------------------------|-----------------------|---------------|-----------|
| Informer (Table 9, Shabani et al., 2023) | 96 | 96 | 0.330 | 0.372 |
| Scaleformer (Table 9, Shabani et al., 2023) | 96 | 96 | 0.238 | 0.288 |
| seasonal-naive-1week | 168 | 168 | 0.111 | 0.175 |
| Low2HighFreq (Scaleformer for RNNs) | 168 | 168 | 0.082 | 0.166 |
| SutraNets (Backfill-alt) | 168 | 168 | 0.074 | 0.128 |
Here we *predict* 168 steps ahead vs. 96 for Scaleformer (which should disadvantage *our* results). In advance of the camera-ready deadline, we will repeat these evaluations to use length-96 prediction windows, and we will also train versions of our system using length-96 conditioning. Also note Scaleformer evaluates using CRPS, while we evaluate using wQL, a 10-point approximation to CRPS (for point predictors like seasonal-naive-1week, note CRPS reduces to normalized deviation). While in some ways it is unfair to compare systems using different amounts of context, it is nevertheless quite notable that Scaleformer, FEDformer, Autoformer, and TimesNet only condition on 96 historical values for the hourly predictions in all their experiments. We evaluated with up to length-2016 contexts, or 21 times the length used in Scaleformer. The above proposed table should provide helpful information - for both practitioners and researchers - on the potential cost of restricting the look-back context.
Doubly Robust Augmented Transfer for Meta-Reinforcement Learning | Accept (poster) | Summary: This study introduces the DRaT (Doubly Robust Augmented Transfer) algorithm, an advanced extension of hindsight-based transfer methods, which tackles not only reward mismatches but also discrepancies in transition dynamics. The authors provide a theoretical analysis to establish the optimality of their interval-based approximation, DRaE (Doubly Robust Augmented Estimator), in calculating the optimal importance sampling weight. Through empirical evaluation, it is demonstrated that DRaT surpasses conventional hindsight-based methods in performance, particularly in sparse-reward MuJoCo tasks that involve varying reward structures and transition dynamics.
Strengths: **S1.** Addressing a Critical Yet Underexplored Aspect of Transfer Learning
This study tackles an essential but often overlooked aspect of transfer, specifically in the context of varying transition dynamics. The authors have put forth commendable efforts in highlighting the significance of calculating the optimal dynamics importance weight, which is a critical component in transfer under changing transition dynamics.
**S2.** Solid Theoretical Foundation and Novel Approximation Method
The work presents a robust theoretical underpinning for the proposed method, ensuring reliability and efficacy. The use of a tractable interval-based approximation is innovative and enhances the practicality of the approach. The proof confirming that the mean squared error (MSE) is guaranteed to encompass the optimum is both rigorous and convincing.
**S3.** Thorough Empirical Evaluation in Diverse Scenarios
The authors have conducted a comprehensive empirical evaluation by testing their method in three environments with sparse rewards and three with varying state dynamics. Such thorough testing provides a more complete picture of the algorithm's capabilities. Impressively, the proposed method demonstrates substantial and consistent improvements over all baselines, attesting to its effectiveness and robustness.
Weaknesses: **W1.** Computational Overhead
A notable drawback of this study, which the authors have acknowledged in the limitations section, is the computational burden associated with calculating the doubly robust estimate and training the dynamics network. Given that practicality is an essential factor in the real-world application of algorithms, it is crucial to consider computational efficiency. It would be beneficial if the authors could include a comparison of the wall-clock time for each method, as this would provide valuable insights into the trade-offs between performance gains and computational costs.
**Acknowledgment Following Rebuttal**
I have reviewed the author's response. The response addressed my concerns.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: **Q1.** Results in Meta-RL are typically characterized by a high degree of variability, necessitating the aggregation of multiple runs for reliable evaluation. However, it appears that the number of seeds used for evaluation is not reported in the manuscript or the appendix. Could the authors clarify how many seeds were employed and consider including this information for a more transparent and rigorous evaluation?
**Q2.** Given the computational overhead associated with the proposed method, as discussed earlier, are there any strategies or optimizations that the authors have considered to enhance its computational efficiency? Insights into potential avenues for reducing computational costs without compromising effectiveness would be valuable.
**Q3.** Contrary to the checklist, I think the authors didn't submit the code or link to their codes. Any plans for releasing the code?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comment 1 - Computational Overhead Analysis and Wall-Clock Time Comparison:** Thanks for this valuable suggestion. Due to the space limit, please refer to our General Response to the common concern of "Providing Time Complexity Analysis of DRaT" for a detailed discussion of the computational overhead analysis of our DRaT and a wall-clock time comparison of all the methods.
****
**Comment 2 - Strategies to Further Reduce Complexity of DRaT:** Thanks for this valuable suggestion. In the final version, we will provide some insights into potential avenues for reducing the computational costs of our proposed DRaT, as follows.
Here, we discuss some possible strategies for further reducing the computational cost of DRaT without compromising too much of its effectiveness. For example, to reduce the computational cost of training a dynamics prediction network for each training task, we may consider training a meta-dynamics prediction network, which can make predictions for all the tasks with a single network and thus eliminates the need to train a separate dynamics prediction network for each task. To also reduce the computational cost of DR estimation, we may consider a solution similar to that of TD($n$), which makes a trade-off between TD($0$) and Monte-Carlo estimation by using an $n$-step rollout and a fitted network.
****
**Comment 3 - Choice of Random Seeds:** Thanks for this valuable
suggestion. We will report in the final version that the results of each
algorithm are averaged across 3 random seeds, which is also a common
setting in the baseline meta-RL algorithms, such as PEARL \[4\] and
PromP \[19\]. Please also note that the evaluation curves reported in
this paper are obtained by evaluating on a test task set that contains
dozens of tasks, which already reduces the randomness of evaluation in
the meta-RL setting. To validate this, we further perform an experiment
trial with 6 random seeds in the Ant-Params-Sparse and Humanoid-Params-Sparse environments. We
then split the trial into two groups of 3 random seeds and average the
results within each group. As shown in Fig. 4 of the additionally uploaded
PDF, there is no significant difference in performance between these two
groups.
****
**Comment 4 - Plans for Releasing the Code:** We are now optimizing our
source code and will release it shortly after the optimization is finished.
---
Rebuttal Comment 1.1:
Title: Response to the Rebuttal
Comment: I'm grateful to the authors for addressing my concerns. Given the solid contribution of this work, I maintain my rating. | Summary: The paper identifies that some existing approaches ignore varying dynamics in meta-RL and proposes a new method for addressing this. In particular, the paper focuses on minimizing the mean-squared error of value functions and shows that doubly robust estimators suffer from a high-variance problem. Consequently, the paper proposes the doubly robust augmented estimator to estimate the importance weight of the dynamics, which reduces variance at the cost of introducing bias. The paper further identifies the interval in which the importance weight should lie in order to maintain low variance. Finally, the paper provides empirical analysis to demonstrate that the proposed approach yields smaller value prediction error than existing baselines, and that the proposed method can outperform existing baselines in certain control environments.
Strengths: - The paper is well-written and easy to follow in general.
- Visualizations support various claims regarding the behaviours of the estimators---the paper demonstrates that the proposed estimator has lower error compared to existing approaches, and that the policy importance sampling weights do increase to approximately $1$ as informative trajectories are sampled more frequently.
- The paper demonstrates that the proposed method is theoretically sound (under some assumptions) and that its empirical performance is comparable to (if not better than) existing baselines.
Weaknesses: ## Major Comments
- Figure 3 suggests that with the same reward function but different dynamics, the current baseline is already fairly robust? Does this really suggest that in Figure 2, the problem stems from the sparse reward?
- This may have stemmed from a confusion. For section 5.2, page 9, line 329, the paper mentions that "..., while keeping all the other settings the same as in DRaT." Does that mean other approaches are now using DRaE?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - The appendix mentions that the proofs really follow through by assuming that $\gamma \rho_{\pi}^{ij}(t) \tilde{V}_{ij}^{DR} (s)$ is not correlated to other expectation terms. I am wondering why this is a reasonable assumption---in particular, it is not immediately obvious that this term does not dominate the expectation terms.
- How does different $\sigma$ for approximating $N(s', \sigma)$ affect the estimation? Is there a sensitivity analysis regarding this? My expectation is that very small $\sigma$ would cause stability problem, while very high $\sigma$ would prevent learning. Is there an informed way to select this?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The paper has listed few limitations of the proposed approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comment 1 - Concern on Figures 2 and 3:**
Please note that although the current baselines (e.g., PEARL \[4\] and
HFR \[15\]) in Fig. 3 may have a smaller performance gap with our DRaT
than in Fig. 2, they still suffer from a larger standard deviation of
performance (as indicated by the larger shaded area in Fig. 3). Such a
larger standard deviation would still affect their robustness. In
comparison, our DRaT presents in general a smaller standard deviation of
performance in Fig. 3.
On the other hand, the environments in Fig. 2 present an extremely
challenging scenario with both the sparse-reward and varying-dynamics
settings. Specifically, the problem stemming from sparse rewards is that
the robot can only get reward signals indicating its distance to the
goal position when it gets close to the goal position within a small
range. In addition, the dynamics of this robot may also vary by
setting different values of the environment parameters, including the body
mass, body inertia, joint damping, and body components' friction. As
shown in Fig. 2, both the sparse-reward and varying-dynamics settings
bring challenges to the training of meta-RL algorithms.
****
**Comment 2 - Confusion on Using DRaE in Other Approaches:** There might be
some misunderstandings here. The original sentence in Line 329, Page 9
is: "In IST, we use the IS estimator instead of our DRaE, while keeping
all the other settings the same as in DRaT."
Please note that this Importance Sampling augmented Transfer (IST)
method is simply a variant of our Doubly Robust augmented Transfer
(DRaT) method, with the same hyperparameter settings for network
training and task relabeling as in DRaT, except for using the Importance
Sampling (IS) estimator instead of the proposed Doubly Robust augmented
Estimator (DRaE). While for the other comparison approaches, they still
used their original value networks for value estimation, and did not
utilize our proposed DRaE.
****
**Comment 3 - Assumption on $\gamma \rho^{ij}\_{\pi}(t) \tilde{V}^{DR}\_{ij}(s\_{t+1}) $ Loosely
Correlated to Other Expectation Terms inside the Parentheses in Appendix
A.7:** Thanks for pointing out this issue. In the following, we will
briefly explain why this assumption is made, which will be further
included in the final version.
The original equation in Appendix A.7, after letting the first-order derivative of the objective function be zero, is given by:
\begin{align*}
&2\mathbb E\_t \Bigg[ \Big(\gamma \rho^{ij}\_{\pi}(t) \Big)^2 \tilde{V}^{DR}\_{ij}(s\_{t+1}) \Big(\hat{\rho}^{ij}\_d(t) \tilde{V}^{DR}\_{ij}(s_{t+1}) - {\rho}^{ij}\_d(t) V^{DR}\_{ij}(s_{t+1}) \Big)\Bigg]
\\\\
&+ 2\mathbb{E}\_t \Bigg[ \gamma \rho^{ij}\_{\pi}(t) \tilde{V}^{DR}\_{ij}(s_{t+1}) \Bigg( \rho^{ij}\_{\pi}(t) \gamma \left( \hat{\rho}^{ij}\_d(t) \tilde{V}^{DR}\_{ij}(s\_{t+1}) -\mathbb{E}\_{t+1}[V^j(s\_{t+1})] \right) - \rho^{ij}\_{\pi}(t)\Delta(s\_t,a\_t) +\bar{V}\_{\theta}(s\_t,z\_j)\Bigg) \Bigg] = 0.
\end{align*}
By eliminating the constant of 2 and further merging the two expectations on the left-hand side into one expectation, we have:
\begin{align*}
\mathbb E\_t \Bigg[ \gamma \rho^{ij}\_{\pi}(t) \tilde{V}^{DR}\_{ij}(s\_{t+1}) & \cdot \Bigg(2\gamma \rho^{ij}\_{\pi}(t)\hat{\rho}^{ij}\_d(t) \tilde{V}^{DR}\_{ij}(s\_{t+1})
\\\\
&- \gamma \rho^{ij}\_{\pi}(t){\rho}^{ij}\_d(t) V^{DR}\_{ij}(s_{t+1}) -\gamma \rho^{ij}\_{\pi}(t)\mathbb{E}\_{t+1}[V^j(s\_{t+1})] - \rho^{ij}\_{\pi}(t)\Delta(s\_t,a\_t) +\bar{V}\_{\theta}(s\_t,z\_j)\Bigg)\Bigg] =0.
\end{align*}
Note that inside expectation on the left-hand side is the multiplication of $\gamma \rho^{ij}\_{\pi}(t) \tilde{V}^{DR}\_{ij}(s\_{t+1}) $ with a summation enclosed by the parentheses, which can be rewritten as:
\begin{align*}
&\Bigg(2\gamma \rho^{ij}\_{\pi}(t)\hat{\rho}^{ij}\_d(t) \tilde{V}^{DR}\_{ij}(s\_{t+1}) -\gamma \rho^{ij}\_{\pi}(t){\rho}^{ij}\_d(t) V^{DR}\_{ij}(s\_{t+1}) -\gamma \rho^{ij}\_{\pi}(t)\mathbb{E}\_{t+1}[V^j(s\_{t+1})] - \rho^{ij}\_{\pi}(t)\Delta(s\_t,a\_t) +\bar{V}\_{\theta}(s\_t,z\_j)\Bigg)
\\\\
=& \gamma \rho^{ij}\_{\pi}(t) \Bigg(2\hat{\rho}^{ij}\_d(t) \tilde{V}^{DR}\_{ij}(s\_{t+1}) - {\rho}^{ij}\_d(t) V^{DR}\_{ij}(s\_{t+1}) -\mathbb{E}\_{t+1}[V^j(s\_{t+1})] \Bigg) - \rho^{ij}\_{\pi}(t)\Delta(s\_t,a\_t) +\bar{V}\_{\theta}(s\_t,z\_j).
\end{align*}
On the right-hand side of this equation, the first three terms are closely correlated to $\gamma \rho^{ij}\_{\pi}(t) \tilde{V}^{DR}\_{ij}(s\_{t+1})$, while the last two terms are loosely correlated to it. Furthermore, given that $\hat{\rho}^{ij}\_d(t)$ lies inside an interval upper-bounded by its true value ${\rho}^{ij}\_d(t)$, the value of $\hat{\rho}^{ij}\_d(t) \tilde{V}^{DR}\_{ij}(s_{t+1})$ is comparable to those of ${\rho}^{ij}\_d(t) V^{DR}\_{ij}(s\_{t+1})$ and $\mathbb{E}\_{t+1}[V^j(s\_{t+1})]$. Thus, the first three terms will largely cancel each other, while the last two terms, $- \rho^{ij}\_{\pi}(t)\Delta(s\_t,a\_t) +\bar{V}\_{\theta}(s\_t,z\_j)$, will dominate; these are loosely correlated to $\gamma \rho^{ij}\_{\pi}(t) \tilde{V}^{DR}\_{ij}(s\_{t+1})$.
****
**Comment 4 - Sensitivity Analysis of $\sigma$:** Thanks for this
valuable suggestion. As will be included in the final version, we
conduct experiments on the sensitivity analysis of $\sigma$ for the
Ant-Params-Sparse environments. By varying the values of $\sigma$,
we plot the curves of returns in Fig. 3 of the additionally
uploaded PDF. As expected, a smaller value of $\sigma$ (e.g.,
$\sigma=0.1$) leads to a performance degradation at the initial
training stage, but this performance gap is reduced as training
proceeds. On the other hand, a larger value of $\sigma$ (e.g.,
$\sigma > 2$) may incur very high value estimates and thus prevent
learning; the corresponding training curves are therefore not
reported in Fig. 3 of the additionally uploaded PDF.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed answers and have addressed my questions. | Summary: This paper focuses on the meta-reinforcement learning setting with sparse reward. Previous work with hindsight-based sample transfer approaches requires the assumption that tasks differ only in reward functions. This paper proposes a doubly robust augmented transfer (DRaT) approach that allows both dynamics mismatches and varying reward functions across tasks. They first theoretically derive an upper bound of mean squared error between the estimated values of transferred samples and their true values in the target task. Then they propose an interval-based approximation to empirically find the optimal importance weight. In the experiment part, they implement DRaT on top of an off-policy meta-RL method and show that this method outperforms hindsight-based approaches on various sparse-reward MuJoCo locomotion tasks.
Strengths: 1. This paper is well-written and easy to follow. The motivation for using doubly robust estimators to solve the sparse reward problem in meta-RL is convincing.
2. The theoretical analysis of the doubly robust augmented estimator is detailed and well-motivated.
3. The empirical implementation of the proposed algorithm outperforms other baselines by a large margin.
Weaknesses: 1. The details about the meta-RL setting in experiments are not provided. The authors mention that they generate various dynamics by randomly sampling the environment parameters, including body mass, body inertia, joint damping, and body component friction. However, I cannot find details about the range of those parameters.
2. The authors say that “One family contains the sparse-reward environments with varying reward functions and dynamics”. I cannot find details about how the reward functions differ in these settings. Why does “control the arm of a 3D robot to reach random goals in the 3D space” have different reward functions?
3. The algorithm requires the reward function for relabeling and selecting informative trajectories, which is similar to previous hindsight-based methods.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Please provide the missing details in the experiment as I mentioned in the weaknesses.
2. Why use PEARL as the backbone? I am not familiar with the area of meta-RL with sparse reward, but I wonder if there is any later work that outperforms PEARL in the meta-RL setting. If yes, maybe more baselines should be reported.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: As discussed in the conclusion section, this method requires the reward function, which may not be available in some tasks. Since this is a common setting in existing methods for sparse reward meta-RL, it is still acceptable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comment 1 - Providing Detailed Experiment Settings:** Thanks for this
valuable suggestion. We will provide in the final version more details
about the meta-RL setting in experiments, as follows.
The randomization of dynamics in all the environments in our experiments
is implemented by generating different environment parameters through:
$$
param_{ij} = \beta_j*initparam_{i},
$$
where $\beta_j = A^{x_j}$ with $x_j\sim Uniform(-B, B)$; $A$ and $B$ are the
constants which control the generation of $\beta_j$ for each environment
parameter of task $j$, and $initparam_{i}$ is the initial value of the
$i$-th environment parameter. Overall, these randomly sampled
environment parameters include the body mass, body inertia, joint
damping, and body components' friction, for which the values of the constant
pair $(A,B)$ are listed in Table 2 of the additionally uploaded PDF.
With the initial values $initparam_{i}$ loaded directly from the
original file of "mujoco_py", the randomized environment parameter
$param_{ij}$ is then obtained and set on the MuJoCo simulation engine to
generate various environment dynamics.
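For concreteness, the sampling scheme above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' released code; the function name, signature, and the default values of $A$ and $B$ are our own placeholders (the actual $(A, B)$ pairs per parameter are listed in Table 2 of the rebuttal PDF).

```python
import random

def randomize_params(init_params, A=1.5, B=1.0, seed=None):
    """Sample param_ij = beta_j * initparam_i, with beta_j = A ** x_j
    and x_j ~ Uniform(-B, B), independently for each parameter."""
    rng = random.Random(seed)
    return [A ** rng.uniform(-B, B) * p for p in init_params]
```

Since $x_j \in [-B, B]$, each multiplicative factor $\beta_j$ lies in $[A^{-B}, A^{B}]$, so the randomized parameters stay within a bounded multiplicative range of their defaults.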
Please note that since the robot models in our testing environments are
complicated, with a large number of environment parameters, the initial
values $initparam_{i}$ are usually stored as high-dimensional
arrays in the "mujoco_py" file. Due to the space limit, we only show the
initial values set for the Ant-Params-Sparse environments in Fig. 2 of
the additionally uploaded PDF as an example. For more details about the
settings of these initial values, please refer either to the publicly
available "mujoco_py" file, or to our source code, which will be
released shortly after the code optimization.
****
**Comment 2 - Using PEARL as Backbone:** We adopted PEARL as the off-policy
meta-RL backbone in this paper, mainly because it is a sample-efficient
meta-RL algorithm, which significantly outperforms the other baseline
meta-RL algorithms, such as MAML \[12\], PromP \[19\] and RL2 \[3\] on
the standard meta-RL testing benchmark. This also explains why the
following works that further tackle the sparse-reward challenge in
meta-RL are mostly built upon the PEARL backbone, such as the Hindsight
Task Relabeling (HTR) \[14\] and Hindsight Foresight Relabeling (HFR)
\[15\]. These methods have demonstrated a superiority over PEARL in the
sparse-reward setting, and also were compared with our DRaT in the
experiments section.
[3] Y. Duan, J. Schulman, X. Chen, P. L. Bartlett, I. Sutskever, and P. Abbeel. RL2: Fast reinforcement learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779, 2016.
[12] C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, 2017.
[14] C. Packer, P. Abbeel, and J.h E Gonzalez. Hindsight task relabelling: Experience replay for sparse reward meta-RL. In NeurIPS, 2021.
[15] M. Wan, J. Peng, and T. Gangwani. Hindsight foresight relabeling for meta-reinforcement learning. In ICLR, 2022.
[19] J. Rothfuss, D. Lee, I. Clavera, T. Asfour, and P. Abbeel. Promp: Proximal meta-policy search. In ICLR, 2019.
****
**Comment 3 - Details about Settings of Different Reward Functions:** Thanks
for pointing out this issue. In Section B.2 of the Supplementary
Material, we provided the reward function settings for the
sparse-reward environments with varying rewards and dynamics, while the
reward function settings for the dense-reward environments were stated
to use the same implementation as in PEARL's open-sourced code. In the
following, we provide further explanation to make the settings of
different reward functions clearer.
The reward function defined for the Point-Robot environments, which
control the arm of a 3D robot to reach a random goal position in the 3D
space, is given as: $$reward=
\begin{cases}
-dist(robot,goal)+C & & if\ dist(robot,goal)<D,\\\\
0 & & otherwise.\\\\
\end{cases} $$ We set $C=0.0$, $D=+\infty$ for the dense-reward
function in Point-Robot-Params, where the reward signal stands for the
distance between the tip of robot arm and the goal position, which can
always be accessed. We set $C=1.0$ and $D=0.2$ for the sparse-reward
function in Point-Robot-Params-Sparse, where the RL agent will get a
meaningful reward signal that indicates the distance to the goal only
when the robot arm is close to its goal within a small range. In both
cases, goal positions are randomly generated in the 3D space, and hence
the reward functions are also varying to indicate a distinct distance to
different goal positions.
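The piecewise reward above translates directly into a short Python sketch. The function name is ours; the defaults follow the sparse setting quoted above ($C=1.0$, $D=0.2$), and $C=0.0$, $D=+\infty$ recovers the dense variant.

```python
import math

def point_robot_reward(robot_pos, goal_pos, C=1.0, D=0.2):
    """Sparse reward: -dist(robot, goal) + C if dist < D, else 0.
    Setting C=0.0 and D=math.inf recovers the dense-reward variant."""
    dist = math.dist(robot_pos, goal_pos)
    return -dist + C if dist < D else 0.0
```

In the sparse setting the agent sees only zeros until the arm enters the radius-$D$ neighborhood of the goal, where the shaped term $-dist + C$ becomes an informative signal.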
****
**Comment 4 - Concern on Requiring Reward Function for Relabeling and
Selecting Informative Trajectories:** Please note that the commonly used
hindsight-based methods, such as HTR \[14\] and AIR \[6\], simply use
the reward function to relabel samples without accommodating the
dynamics mismatch across tasks. Our experiments in Section 5.1
demonstrate this inefficiency of HTR and AIR in the extremely
challenging environments with both sparse-reward and varying-dynamics
settings.
Different from them, we design a doubly robust augmented estimator
(DRaE) to accommodate the more general and realistic meta-RL setting
with both sparse rewards and varying dynamics across different tasks.
DRaE can tackle the mismatch of dynamics distributions in meta-RL with a
guaranteed optimum for the dynamics importance weight, obtained by
minimizing the MSE between the estimated and true values of the value
function. Using DRaE for better value estimation of transferred samples,
our proposed DRaT algorithm demonstrates its superiority on several
meta-RL testing benchmarks, as verified by the experiments in Sections 5.1 and 5.2.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thanks for addressing my concerns. The experiment setting is now clear to me. I don't have further questions and I am happy to increase my score to a clear accept. | Summary: The paper introduces Doubly Robust Augmented Transfer (DRaT), a novel approach for dealing with sparse-reward scenarios in meta-reinforcement learning (Meta-RL). DRaT transfers informative trajectories from various tasks to a target task, effectively handling dynamics mismatches and different reward functions. The authors propose an interval-based approximation to the importance weight. The DRaT algorithm is applied to an off-policy Meta-RL baseline, demonstrating superior performance over other hindsight-based methods on various sparse-reward MuJoCo tasks with different dynamics and reward functions.
Strengths: **Originality**: The paper presents a novel approach, Doubly Robust Augmented Transfer (DRaT), for dealing with sparse-reward scenarios in meta-reinforcement learning. The idea of transferring informative trajectories from various tasks to a target task is an interesting direction in the field of meta-reinforcement learning.
**Quality**: DRaT effectively handles dynamics mismatches and different reward functions across tasks, for the MuJoCo environments it is tested on. The method outperforms other hindsight-based methods on various sparse-reward tasks. However, without a detailed analysis of the time complexity of the algorithm or a comparison with other state-of-the-art methods in terms of computational efficiency, it's hard to fully assess the quality of the method.
**Clarity**: While the paper presents complex ideas, it does so in a clear and understandable manner. The authors have done a good job of explaining their method and its benefits. However, some sections could benefit from more detailed explanations or examples, particularly for readers who are not deeply familiar with the field.
**Significance**: The significance of the paper is evident in its potential impact on the field of meta-reinforcement learning. DRaT addresses a key challenge in the field - dealing with sparse-reward scenarios. By demonstrating superior performance on various tasks, the paper shows that the method could be a valuable tool for researchers and practitioners in the field. However, the real-world applicability and scalability of the method would need to be further explored to fully understand its significance.
Weaknesses: **More Extensive Evaluation**: While the paper demonstrates the effectiveness of DRaT on various sparse-reward tasks, a more extensive evaluation would strengthen the paper's claims. The current evaluation is somewhat limited and does not fully explore the method's performance across a wide range of scenarios. The authors could consider using the [MetaWorld](https://arxiv.org/pdf/1910.10897.pdf) benchmark, a comprehensive benchmark of environments specifically designed for meta-reinforcement learning. This would provide a more rigorous testing ground for the DRaT method.
**Time Complexity Analysis**: The paper does not provide detailed information on the time complexity of DRaT. Information regarding the performance with respect to time in the experiments would be valuable in assessing the method's scalability and efficiency, particularly for large-scale problems.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Is a separate dynamics prediction network fitted for each possible task 'j' in the training set?
2. Would it be possible to test the DRaT method using the MetaWorld benchmark, which is specifically designed for meta-reinforcement learning?
3. Could you provide a detailed analysis of the time complexity of the DRaT method to assess its scalability and efficiency?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comment 1 - Providing Time Complexity Analysis of DRaT:** Thanks for this valuable suggestion. Due to the space limit, please refer to our General Response to the Common Concern of "Providing Time Complexity Analysis of DRaT" for the detailed discussion on the time complexity analysis of DRaT, the wall-clock time comparison of all the methods, and possible strategies for further reducing the computational cost of DRaT.
****
**Comment 2 - More Evaluation on MetaWorld Benchmark:** We reported in this paper an extensive
evaluation of our DRaT on MuJoCo environments, which are the commonly
adopted benchmark for evaluating meta-RL, used for instance by the
meta-RL baselines PEARL \[4\], MAML \[12\], and PromP \[19\]. For a fair
comparison with PEARL, and with the other comparison methods that are
built upon PEARL (e.g., HTR, HFR, AIR), we thus also adopt MuJoCo as the
benchmark for our empirical evaluations. Please also note that the
testing environments adopted in Section 5.1 are extremely challenging,
with both sparse-reward and varying-dynamics settings, where the
baseline (PEARL) or a comparison algorithm (HTR) may even fail to learn
a useful policy, while our DRaT presents a significant performance gain,
demonstrating its effectiveness in these challenging environments.
Following the reviewer's advice, we have also tried to test our DRaT
on the MetaWorld benchmark. Due to the time constraint, we managed to
conduct experiments on a pushing task in MetaWorld, which controls a
simulated Sawyer arm to push a block to a specified goal position. We
also modify the original sparse-reward Sawyer-Push to
Sawyer-Push-Params-Sparse by generating different dynamics for each
sampled task. We compare DRaT with HFR and PEARL in Fig. 1 of the
additionally uploaded PDF, because they show better performance than the
other baselines on MuJoCo, and because HFR is the only comparison method
that has been tested on MetaWorld. Our DRaT still presents better
performance than HFR and PEARL, while PEARL even fails to improve its
policy.
[4] K. Rakelly, A. Zhou, C. Finn, S. Levine, and D. Quillen. Efficient off-policy meta-reinforcement learning via probabilistic context variables. In ICML, 2019.
[12] C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, 2017.
[19] J. Rothfuss, D. Lee, I. Clavera, T. Asfour, and P. Abbeel. Promp: Proximal meta-policy search. In ICLR, 2019.
****
**Comment 3 - Concern on Dynamics Prediction Network:** As stated in Line 2 of our
proposed DRaT in Algorithm 1, a separate dynamics prediction network needs to
be fitted for each sampled training task. Please refer to our General Response to the Common Concern
of "Providing Time Complexity Analysis of DRaT" for the additional complexity
introduced by doing so, where we also discuss a possible strategy to
reduce this part of the complexity, i.e., training a meta-dynamics
prediction network that makes predictions for all the tasks with a
single network, thus eliminating the need to train a separate dynamics
prediction network for each task.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
We appreciate your valuable suggestions, which helped us improve our paper. Since the end of the discussion period is approaching, we were wondering whether there are any further questions that we could answer.
Thanks again for the time and effort you have dedicated to reviewing our paper and providing these insightful comments. | Rebuttal 1:
Rebuttal: **General Response:** We would like to thank all the reviewers for their helpful comments. Here, we respond to the common concern on the time complexity of our proposed DRaT. For other concerns, please see below our responses to each reviewer's individual comments, where the newly added figures and tables can be found in our additionally uploaded PDF attached at the end of this General Response.
****
**Common Concern - Providing Time Complexity Analysis of DRaT:** As acknowledged in the limitations
section, additional computational complexity is brought by our
DRaT for the computation of the DR estimate and the training of a dynamics
prediction network for each task. Here, **1)** we analytically show that this
additional complexity is comparable to the computational complexity of
its baseline PEARL, with a linear scaling factor of
$\frac{\max(N_{\mathcal{B}}, L) }{N_{\mathcal{B}}} \cdot \frac{\max(c_1, c_2, c_3)}{c_1}$.
**2)** We then empirically verify this analysis by demonstrating that
the wall-clock time of DRaT is $1.3\times \sim 1.7\times$ that of PEARL,
and compare the wall-clock time of the other state-of-the-art methods.
**3)** We also discuss some possible strategies for further
enhancing DRaT's computational efficiency. These analyses will be
included in the final version.
**1) Complexity Analysis.**
Typically in an off-policy meta-RL algorithm, each epoch (i.e., meta-training iteration) is divided into the sampling and training phases. In the sampling phase, the computational cost stems mainly from the actions chosen by feed-forward computation of policy network and inference network. Since all the algorithms (i.e., HTR, HFR, AIR, DRaT) follow the same sampling process as PEARL, we only analyze the computational cost in the training phase.
**PEARL's complexity:** In the training phase, the computational cost contains mainly the feed-forward and back-propagation computation of policy network, value network and inference network. Given $K$ training iterations at each epoch, batch size $N_{\mathcal{B}}$ of transitions from $N$ training tasks, state space cardinality $\vert \mathcal{S} \vert$ and action space cardinality $\vert \mathcal{A} \vert$, and assuming a constant computational cost of feed-forward and back-propagation computation $c_1$, the total computational cost of training phase is $O(K \cdot N_{\mathcal{B}} \cdot N \cdot \vert \mathcal{S} \vert \cdot \vert \mathcal{A} \vert \cdot c_1 )$.
**DRaT's complexity:** Besides the same training cost $O(K \cdot N_{\mathcal{B}} \cdot N \cdot \vert \mathcal{S} \vert \cdot \vert \mathcal{A} \vert \cdot c_1 )$ as PEARL, additional computational cost brought by DRaT includes training of dynamics prediction networks, computation of DR estimator, and computation of relabelling.
- Considering building separate prediction networks for $N$ training tasks and the cost of feed-forward and back-propagation computation $c_1$, using the same batch of transitions for training, the additional cost of training dynamics prediction networks is also $O(K \cdot N_{\mathcal{B}} \cdot N \cdot \vert \mathcal{S} \vert \cdot \vert \mathcal{A} \vert \cdot c_1 )$.
- Considering sampling informative trajectories with a maximal length of $L$ for $N$ training tasks and the cost $c_2$ of DR estimation computation at each time step, the computational cost of DR estimation is $O(K \cdot N \cdot L \cdot \vert \mathcal{S} \vert \cdot \vert \mathcal{A} \vert \cdot c_2)$.
- In this paper, we used the approximate inverse RL relabeling (AIR). We sample one trajectory from each training task for relabeling, leading to $N$ candidate trajectories in total. Assuming the cost of computing relabeled reward at each time step is $c_3$, the computational cost of relabeling is $O(K \cdot N \cdot L \cdot \vert \mathcal{S} \vert \cdot \vert \mathcal{A} \vert \cdot c_3)$.
Incorporating them together, we conclude that the additional computational complexity of DRaT is dominated by $O\Big(K \cdot N \cdot \max ( N_{\mathcal{B}}, L ) \cdot \vert \mathcal{S} \vert \cdot \vert \mathcal{A} \vert \cdot \max ( c_1, c_2, c_3 ) \Big)$, which is comparable to PEARL with a linear scaling factor of $\frac{\max(N_{\mathcal{B}}, L) }{N_{\mathcal{B}}} \cdot \frac{\max(c_1, c_2, c_3)}{c_1}$.
**2\) Wall-Clock Time Comparison.**
To empirically verify the above complexity analysis, Table 1 of the additionally uploaded PDF compares the wall-clock time of DRaT against all comparison methods, including PEARL, HTR, HFR, and AIR. For these robotic control tasks at different scales, DRaT generally takes $1.3\times$ to $1.7\times$ the running time of PEARL, which is also comparable to the other baselines.
**3\) Strategies to Further Reduce Complexity.**
Here, we discuss some possible strategies for further reducing the computational cost of DRaT without compromising too much effectiveness. For example, to reduce the cost of training a dynamics prediction network for each training task, we could train a single meta-dynamics prediction network that makes predictions for all tasks, eliminating the need to train a separate dynamics prediction network per task. To reduce the cost of DR estimation, we could adopt a solution similar to TD($n$), which trades off between TD($0$) and Monte-Carlo estimation by combining an $n$-step rollout with a fitted value network.
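The TD($n$)-style trade-off mentioned above can be sketched as follows; the rewards, value estimates, and discount are illustrative placeholders, and the value list stands in for the fitted network.

```python
# Sketch of an n-step return target, trading off TD(0) (n = 1) against
# full Monte-Carlo estimation (n = episode length).
def n_step_target(rewards, values, t, n, gamma=0.99):
    """n-step TD target: G_t^(n) = sum_{k<n} gamma^k r_{t+k} + gamma^n V(s_{t+n})."""
    horizon = min(n, len(rewards) - t)        # truncate at episode end
    g = sum(gamma**k * rewards[t + k] for k in range(horizon))
    if t + horizon < len(values):             # bootstrap only if a next state exists
        g += gamma**horizon * values[t + horizon]
    return g

rewards = [1.0, 0.0, 2.0, 1.0]
values = [0.5, 0.4, 0.6, 0.3, 0.0]            # V(s_0)..V(s_4); terminal value 0
td0 = n_step_target(rewards, values, t=0, n=1)   # 1.0 + 0.99 * 0.4 = 1.396
mc = n_step_target(rewards, values, t=0, n=4)    # full Monte-Carlo return
```

Larger $n$ lowers the bias from the fitted value network at the price of higher variance and more rollout computation, which is exactly the cost trade-off discussed above.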
Pdf: /pdf/eb3826107e173566ee5e7f0622d27fae1a701528.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
RH-BrainFS: Regional Heterogeneous Multimodal Brain Networks Fusion Strategy | Accept (poster) | Summary: The submission is not in my area, and it's difficult to give reasonable comments about this submission. My research interests focus on medical image reconstruction. Please find another appropriate reviewer to review this paper.
Strengths: N/A
Weaknesses: N/A
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: N/A
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | null | Summary: This paper proposes a novel approach called the Regional Heterogeneous Multimodal Brain Networks Fusion Strategy (RH-BrainFS) to address the issue of regional heterogeneity between structural connectivity (SC) and functional connectivity (FC) in brain networks fusion. The proposed approach includes a brain subgraph networks module to extract regional characteristics of brain networks and a transformer-based fusion bottleneck module to alleviate the issue of regional heterogeneity between SC and FC. The authors claim that this is the first work to propose a solution to the issue of structural-functional modal regional heterogeneity. The paper presents extensive experiments that demonstrate that the proposed method outperforms several state-of-the-art methods in a variety of neuroscience tasks.
Strengths: - The paper is well-structured and clearly written. The authors provide a clear introduction to the problem, a thorough review of related work, and a detailed description of their proposed method. They also clearly explain the experimental settings and datasets used in their research. The use of technical terms is appropriate for the intended audience, and the authors provide sufficient context and explanation to make the content understandable.
- The issue of regional heterogeneity in multimodal brain networks is a significant challenge in the field, and the authors' proposed solution has the potential to improve the performance of multimodal fusion models.
- The authors provided detailed preprocessing and implementation parameters.
Weaknesses: - It is important for the authors to compare their proposed approach with existing methods in the field of multimodal brain network modeling. One such method is the joint embedding of structural and functional brain networks with graph neural networks proposed by Zhu et al. The authors should compare their proposed approach with this method in terms of performance for modality fusion.
- The transformer-based fusion is not original in terms of modality fusion. The authors are suggested to investigate and provide concrete interpretations on the fusion bottlenecks which utilize shared characteristics between modalities.
- Lack of variant on GNN backbone. There are a bunch of GNN architectures for brain network modeling. While the paper acknowledges the use of a BrainSubGNN method, it would be valuable to consider other GNN architectures that have been developed specifically for brain network analysis. By incorporating a broader range of GNN backbones and comparing their performance, the authors can enhance the robustness and generalizability of their proposed approach for multimodal brain network fusion.
- Lack of interpretation analysis. In the context of multimodal brain network modeling, interpretation analysis could involve identifying the brain regions or networks that are most relevant for the classification task, and investigating the functional and structural connectivity patterns that contribute to these regions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How does the proposed approach compare to other state-of-the-art methods in terms of computational efficiency? Is the proposed approach computationally feasible for large-scale datasets?
- The paper mentions that the proposed approach includes a brain subgraph networks module to extract regional characteristics of brain networks. Can the authors provide more details on the design motivation of this module, as well as its neurological justification?
- How would the subgraph sampling affect the effectiveness of the overall framework?
- Would there be any potential modality bias in the experimented data?
- Could you elaborate on how the different brain imaging techniques mentioned in the text, such as functional magnetic resonance imaging (fMRI) and diffusion magnetic resonance imaging (dMRI), reflect different aspects of the brain's internal characteristics?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have acknowledged some of the limitations of their work, such as the limited sample size and the difficulty of data collection in neuroscience. However, the paper does not provide a detailed discussion on the technical bias, e.g., modality bias during fusion of the proposed approach, nor does it address potential negative societal impact of the research. Adding those discussions would enhance the paper's credibility and relevance, and help readers better understand the potential implications of the proposed approach.
Flag For Ethics Review: ['Ethics review needed: Discrimination / Bias / Fairness Concerns']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Answers to weaknesses:
1. For the first weakness: we have in fact followed this work (Zhu et al.), but the authors do not release their code, so we cannot compare with it.
2. For the second weakness, the initial motivation for fusion bottleneck was that we believed that there was regional heterogeneity between SC and FC, and thus we believed that their features or potential embeddings would be misaligned, and thus direct interaction would lead to poorer results. Therefore, we proposed indirect interactions for fusion bottlenecks, and later also experimentally verified the effectiveness of indirect interactions (compare the ablation experiments in the third and fourth rows of Table 2). Therefore, we removed the direct interaction between SC and FC (i.e. direct interaction is forbidden) and adopted the indirect interaction of the fusion bottleneck.
3. For the third weakness, thanks for this suggestion; we have added an ablation experiment replacing the BrainSubGNN module with four common GNN frameworks, with results shown in the table below. (Note: because the computational efficiency of our model is lower than that of other methods, we only chose simple GNNs as backbones, not complex ones.)
Results on the Two-site dataset:
| Backbone | Acc | Sen | Spe | F1 | Auc |
|:---:|:---:|:---:|:---:|:---:|:---:|
| GCN | 70.04±1.16 | 65.40±5.26 | 74.34±4.39 | 66.25±3.23 | 69.87±1.13 |
| GAT | 71.08±1.26 | 71.83±7.87 | 70.02±8.80 | 69.23±2.70 | 70.93±1.30 |
| GIN | 74.65±0.99 | 70.61±7.49 | 78.50±5.99 | 72.07±3.19 | 74.56±1.18 |
| GraphSAGE | 70.03±1.63 | 66.40±5.94 | 73.16±6.34 | 67.12±2.51 | 69.78±1.63 |
| BrainSubGNN (ours) | **78.48±1.43** | **76.20±4.06** | **80.72±3.60** | **77.35±1.97** | **78.46±1.43** |
4. For the fourth weakness, thanks for the good advice. We had considered brain-region importance analyses, but SC and FC are processed separately, meaning we could only identify which brain regions are important for SC and which for FC individually, without an overall result. Second, we felt that brain-region importance analyses were not very relevant to our research motivation; we focus more on solving the problem of regional heterogeneity than on studying brain-region importance.
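As context for the backbone comparison above, a single GCN layer (the simplest backbone in the table) reduces to symmetric-normalized neighborhood averaging; the following is an illustrative sketch with a toy graph, not the code used in the experiments.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN propagation step: ReLU( D^{-1/2} (A + I) D^{-1/2} X W )."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# Toy "brain graph": 4 regions, 3-dim node features, 2-dim output embeddings.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 3)
W = np.random.randn(3, 2)
H = gcn_layer(A, X, W)                          # shape (4, 2)
```

GAT, GIN, and GraphSAGE differ mainly in how this neighborhood aggregation step is parameterized, which is why swapping the backbone is a clean ablation of BrainSubGNN's subgraph design.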
Answers to questions:
1. For the first question, we supplemented a time-consumption experiment (recording the average training time for 100 epochs and the average time for one inference on the Two-site dataset); as expected, our method incurs a greater time overhead.
To explain these results: first, MGCN, GBDM, and MMGNN focus more on processing data features, so they use common GNN frameworks (which are simpler than Transformer or attention modules) in their network structures and thus have better computational efficiency.
Second, AL-NEGAT is an attention-based approach with similarities to our Transformer-based one, so its time overhead is of the same order of magnitude as ours.
Given these results, we admit that our method underperforms in computational efficiency, but the overhead falls within an acceptable range, and in return our model achieves better performance.
Finally, our work can be adapted to large-scale data, because each brain network graph is small (only 90 nodes), so there is no issue of large-scale intractability.
| **Method** | **Train Time (s / 100 epochs)** | **Inference Time Cost (s / 1 inference)** |
|:---:|:---:|:---:|
| **MGCN** | 23.875 | 0.017 |
| **GBDM** | 20.388 | 0.013 |
| **MMGNN** | 23.905 | 0.016 |
| **AL-NEGAT** | 31.573 | 0.023 |
| **Ours** | 33.875 | 0.025 |
2. For the second question, as we mentioned in the introduction section (lines 42-43), the brain network itself has strong local characteristics, so we used a brain subgraph network to extract local characteristics efficiently.
3. For the third question, different subgraph sampling strategies extract different local brain characteristics, and different hop counts reflect local characteristics at different scales (refer to the ablation experiment in Fig. 5). As stated in lines 329-331 of the paper, too large a subgraph sampling hop count makes the sampled subgraphs lose their local character.
4. For the fourth and fifth questions: structural neuroimaging data, i.e., diffusion magnetic resonance imaging (dMRI), reflect voxel tissue density/volume or structural connectivity. The main purpose of structural studies is to reveal anatomical relationships in the brain, which can then be used for prediction.
Functional neuroimaging data, i.e., functional magnetic resonance imaging (fMRI), reflect changes in deoxyhemoglobin concentration caused by task-induced or spontaneously regulated neurometabolism. Functional studies focus on dynamic changes in brain activity or connectivity.
Because of these essential differences, there is necessarily a modal difference between structure and function.
Answers to limitations:
1. Thank you for your suggestion, we really should discuss our technical bias in terms of limitations. Regarding the mentioned negative social impacts, we have explained them in detail in the global rebuttal, and in short we can assure you that this work will not have any negative social impacts.
---
Rebuttal 2:
Title: Response to Rebuttal
Comment: I thank the authors for the response and the added experiments. After reading the rebuttal, I still think a comparison with existing multimodal methods for brain network modeling is needed. I would like to retain my rating.
---
Rebuttal Comment 2.1:
Comment: Thank you for all your previous comments.
Regarding the comparison method: I emailed the authors (Zhu et al.) for their code a long time ago, but did not receive a reply. In Section 4.2, we have compared many existing multimodal methods for brain network modeling (e.g., MGCN, GBDM, MMGNN, and AL-NEGAT), and we will describe these in more detail in the revised version. | Summary: The paper identifies a gap in the literature of multimodal brain networks fusion in which current methods are said to only use "simple patterns" to fuse modalities, i.e., concatenation, weighted summation, and self-attention. To tackle this issue, the paper proposes RH-BrainFS, a new model fusing structural connectivity and functional connectivity data constructed from dMRI and fMRI, respectively. This model includes a BrainSubGNN model for subgraph sampling and subgraph embedding processes, as well as a fusion bottleneck based on transformers.
Strengths: The results of the experiments (as seen in table 1) seem to be particularly strong, and this work is very significant given the lack of varied work in the field of multimodal brain imaging fusion. It is very good that the authors use more traditional ML models beyond deep learning (ie, SVM and random forests) to evaluate the relevance of this work, and that several metrics are considered beyond the simple accuracy metric. Finally, it usually takes a lot of time and manual work to preprocess dMRI data to be in the same parcellation as the fMRI data, thus the fact that this data is not available out-of-the-box clearly adds to the significance of this applied work - I hope the authors will make this preprocessed dataset available at a certain point.
---------------------------------
EDIT AFTER REBUTTAL PERIOD:
From the rebuttal period, the authors did a very good job in addressing most of my concerns, and therefore I'm increasing my decision from borderline accept to weak accept, the soundness score from 2 to 3, and the presentation score from 3 to 4.
Weaknesses: I identify three main weaknesses that require clarification during the rebuttal process and are the reason for me to give a borderline accept despite the strengths. I will be happy to review my score as a result of the rebuttal process if my points are properly tackled. I number my comments for easier discussion.
1. A key weakness of this work is that it seems to me that there's an overstatement when the authors write that this is the first work to propose a solution to the issue of structural-functional model regional heterogeneity. Maybe this issue is ill-defined and needs a clearer description; otherwise I don't see how some of the previous works mentioned do not try to create models that take into account different functional-structural interactions in different regions of the brain. For example, work [59] (mentioned in the introduction as an example of a "self-attention" technique) seems to use attention at different layers of the model and, due to its (attention) nature, it seems too much to call it a "simple pattern". Similarly, for work [58], I believe the authors are oversimplifying it by classifying it as just a "weighted summation", as it seems to me to be a much more complex usage of graph neural networks (GNNs) to fuse different modalities of the brain in a non-linear fashion. Indeed, work [58] seems to follow other works that I've seen leveraging GNNs to fuse different brain modalities (e.g., 10.48550/arXiv.2007.09777, 10.1016/j.media.2022.102471, 10.1007/978-3-030-32248-9_89) which the paper did not mention, and to me they are clearly tackling the issue of functional-structural interactions even if they did not use this exact name. Finally, even in the simplest case of just concatenating (embedded) features, it doesn't mean a model is making a "one-to-one mapping" as defended in the paper.
2. A second key weakness I see in this paper is how some decisions seem to be justified/made in section 3.3.1. Section 3.3.1 starts with "Due to the issue of regional heterogeneity between SC and FC, it is obviously not feasible to pass messages directly between two modalities". What do the authors mean by "messages" here and why is this so obvious? Wouldn't "messages" between the different modalities actually help in the issue of not just concatenate two modalities? This seems to be related to lines 187-189, when the paper says that it avoids direct interactions between modalities. Why is direct communication such a bad thing and even called a "forbidden" interaction as indicated in figure 3? It seems to me that direct communication, together with other means of "communication" could help any model achieve better performance. Could maybe the authors justify this point a bit better? A possible ablation analysis I see here to support this statement would be to remove Z_b altogether, and this would imply that just one transformer would be required to fuse functional/structural data, greatly reducing the complexity of the model for (maybe) a similar performance.
3. The final key weakness I have to identify in this paper is that it's not clear to me whether the results are overly-optimistic, and I will appreciate any clarification the authors can provide on this if I misunderstood this point. The paper seems to use a single train/validation set (for each fold of the 10-fold CV procedure), therefore the evaluations in table 1 seem to be made on the validation set. It also seems the case that this same validation set is used to know when to stop the training procedure. If I understood this correctly, then the results in table 1 can be overly-optimistic, as the same data used in training (eg, to know when to stop) was also used for final evaluation.
(Small typo, in Line 203, should not capitalise "We")
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Beyond what I wrote in the "Weaknesses" section, I have a few more questions:
1. Am I correct to understand that both the functional and structural modality need to be parcellated with the same atlas? In figure 1's representation I got the impression that the two graphs could be different, but then from the text it doesn't seem that this is the case. Can the authors please clarify this?
2. Have the authors considered analysing different choices for the readout function to check for possible result improvements?
3. Have the authors considered other tasks beyond binary classification? For example a regression-based task like age prediction?
4. Around line 249, if the feature matrix of the structural data is of shape |V_{SC}| x |V_{SC}|, doesn't it mean each node will have repeated features given the symmetric nature of the matrix?
5. Around lines 259/260, I understood that the data is split in 50% train data and 50% validation data. If this is true, can the authors clarify why this was the case? In our field it is more common to have a higher percentage for the train data, and that's why I ask this.
6. Although it is good that the paper contains traditional ML models as baselines (ie, SVM, random forest), why didn't the authors choose the traditional methods mentioned in the Related Work section?
7. Work [59] seems to be the only previous work mentioned by the authors that use self attention. Given the model proposed by the authors highly relies on Transformers, shouldn't this work be included in the baselines comparison?
8. Do the authors have any hypothesis on why the results of the two-site brings worse results than the two datasets separately? Could it be the way the two sites are separated when splitting the data in train/val sets? (I am assuming more data usually means better generability)
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper highlights some limitations, but no potential negative societal impact of work seems to have been mentioned. The fact that the model uses medical data, and that it can be used to predict depression, seems to me to be potentially be used in negative ways with direct impacts in someone's life?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First of all, thank you for your recognition of our work. As for releasing the data, it does require follow-up discussion with multiple parties (including, but not limited to, the hospital).
Answers to weaknesses:
1. Perhaps we did define this issue poorly and describe it unclearly. The reason we say this is the first such work is that our work focuses on indirect interactions between regionally heterogeneous modalities rather than direct interactions, and we genuinely did not find prior work in this direction. As for the cited works [58, 59], they do conduct much meaningful research on multimodal fusion, but they still essentially fuse multimodal information directly, so we classify them as simple patterns (we define direct interaction as a simple pattern, not that these works are simple).
Secondly, "one-to-one mapping" merely describes the fact that there is no one-to-one mapping between function and structure; it is not used to describe any previous work.
2. By "message" we simply mean the semantic information within a modality (such as node features or the graph's structural information).
Secondly, we do not claim that information from the other modality is detrimental to the final task; rather, our research focuses on heterogeneity and on indirect interaction between modalities. Information from the two modalities still interacts in essence, but we convert the direct interaction into an indirect one.
Thirdly, to address the "bad thing" concern: because of the heterogeneity problem, we conceived the indirect-interaction framework and then verified its benefit experimentally (compare the ablation experiments in the third and fourth rows of Table 2), so we believe that direct interaction is worse than indirect interaction. "Forbidden" interactions mean that direct interactions between the two modalities are removed from our model. We did include the corresponding ablation experiment in the paper, which may have been misunderstood because we did not describe it clearly: in the ablation, w/o Trans-Bottlenecks removes the fusion bottleneck and uses a standard Transformer to let the two modalities interact directly (exactly the ablation analysis you suggest).
3. We understand your concern. Our experimental setup indeed uses a 10-fold CV procedure in which only training and validation sets are divided. We did not hold out a separate test set because, after surveying the literature, we found that many brain-science studies use cross-validation in this way (refer to 10.1016/j.media.2021.102082, 10.1109/TNNLS.2022.3154755, 10.1016/j.media.2022.102550, 10.1109/TMI.2022.3222093), so we adopted the same approach. In our view, the 10-fold CV procedure evaluates multiple splits and better mitigates the effect of split randomness.
Answers to questions:
1. Structure and function are really two different graphs, they are two graphs with no connection at all, two modal information. The same atlas refers to the fact that we used the same atlas brain partitioning template, so that the two modal graphs have the same number of nodes and nothing else is the same.
2. Thanks to your suggestion, we now add this experiment in the "global" rebuttal pdf file (refer to Tab. 1 and Tab. 2 in the "global" rebuttal pdf).
3. It is true that we did not consider other types of tasks in the paper because our current study is based on current clinical needs, and in the future if there is a need for other types of tasks, we believe that our model can do a good job of migrating as well.
4. Because SC essentially represents the number of fiber connections each brain region has to the other brain regions, the feature matrix is of shape |V_{SC}| x |V_{SC}|. Some values may repeat due to the symmetry, but they carry different meanings.
5. As explained in detail in the answer to the third weakness above: in each fold, 90% of the data is used for training and 10% for validation and testing.
6. In this paper, we focus more on machine learning (including deep learning) methods, so we do not compare with SNF or ICA, which are traditional methods, in our comparison experiments.
7. Work [59] is based on brain images, whereas our study is based on brain network graphs; there is a fundamental difference between images and graphs. So we did not include it in the comparison experiments (as mentioned in lines 273-274 of the paper, our comparison methods are those operating directly on SC and FC). Moreover, work [59] does not release its code. We eventually added AL-NEGAT, an attention-based approach, as a comparison method.
8. Since multi-site data are collected from different scanners with different acquisition parameters, non-neural inter-site variability may mask inter-group differences. Although multi-site data increase the sample size, the overall data distribution becomes more complex, which leads to performance degradation. In the two-site experiments, we first split the two datasets individually (90% train, 10% validation) and then combine them into a complete two-site dataset. Other multi-site works also report performance degradation, e.g., 10.1016/j.media.2021.102279, 10.1016/j.media.2020.101765.
Answers to limitations:
Regarding the negative social impacts mentioned: firstly, our research is currently limited to scientific study and has not been put to industrial use. Secondly, our data were collected with the consent of the subjects, who were informed of the purpose for which the samples were collected. We also observe ethical and moral principles, and all of our samples were collected with personally identifiable information hidden. Overall, we can guarantee that the present work will not bring about any negative social impact.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the time spent answering my concerns point by point. I have some follow-up comments and questions, which I'll do point by point as well.
Regarding the weaknesses:
1. It seems to me what the authors are trying to defend is still an over-statement. In the rebuttal to this point, the authors try to explain something that we agreed might be ill-defined, by using two concepts (direct interactions and indirect interactions) without explaining them in detail. From what I understand as indirect and direct interactions, I still don't see how previous works ([58, 59], and the ones I mentioned) cannot be seen as exploiting indirect interactions. We are talking about previous works that use attention, and leverage complex interactions using GNNs. GNNs in particular are able to exploit various hops of information, and therefore I cannot see how these previous works explore only direct connections or "one-to-one mappings" between function and structure.
2. I think I understand the ablation analysis better now, thanks a lot for the clarification. I see now that the ablation "w/o Trans-Bottleneck" actually corresponds to the ablation analysis I suggested, so I apologise for my confusion. I also see how this result tries to show that, on average (as per table 2), having that fusion bottleneck "forbidding" direct interactions seems to show that direct interactions are not as good as indirect interactions. In this case, the key weakness of this point is a bit different from what I initially understood. This specific ablation result does not seem to be significantly different (in table 2), with averages sometimes very close to each other and with big/overlapping standard deviations. I'd say therefore that what seems to be happening here is that the increase in averaged performance actually happens because we have a significant more complex model (two transformers) instead of just one transformer, and that the overlapping standard deviations seem to show that results are so similar that the proposed model in the paper might not be worth double the cost to bring a marginal improvement. Thus, this experiment actually seems to show that direct connections are so important that a single transformer is able to leverage enough information to get similar performance at half the computation cost of the fusion bottleneck that the authors propose. This also seems to show to me that the sentence "Due to the issue of regional heterogeneity between SC and FC, it is obviously not feasible to pass messages directly between two modalities" is not properly defended as the wording is too strong (ie, *obviously* not feasible) for the results presented. In this rebuttal point the authors seem to agree with me that information from different modalities can be useful, they just decided to focus on this more indirect connections given their results. However, as I try to defend here, this doesn't seem that *obvious*.
3. The authors seem to agree with me that their CV approach uses the same split for the validation and test set. In this case, then, this is still a key weakness and results might be overly-optimistic. I am well aware, as the authors point out, that a lot of ML literature has a serious issue with data leakage and wrong evaluation processes. However, for a conference like neurips, we'd expect good/accepted papers to be aware that if a training procedure uses a validation set either for hyperparameter search or to know when to stop, an independent test set is required to report final performance. I agree 100% with the authors that a CV procedure is a good - and actually necessary - procedure to report results. The main issue, though, is that the test sets of this CV procedure match the validation set used to train the neural networks at each split, and that is wrong.
With regards to my questions, I just have follow-up questions/comments to two of them:
1. If the graphs of the two modalities have the same number of nodes, then the authors might want to consider updating figure 1 for increased readability. As I said, it seems to show that the two modalities might have a different number of nodes.
6. Those other methods that the authors mention in the Related Work section, like SNF and ICA, seem to be important in the context of multimodal brain fusion. In this sense, I do not understand why the authors preferred to focus only on more traditional ML models. SVM and random forests are traditional ML models that are not based on deep learning; comparing against them does help show the relevance of this work, but they were not the methods presented as relevant in the Related Work section. It therefore seems even more important that the authors use SNF and ICA as baselines, to show the relevance of their work in the context of the literature that they themselves present in the Related Work section.
---
Reply to Comment 1.1.1:
Comment: Answers to weaknesses:
1. We apologise for not explaining direct and indirect interactions in detail in our previous rebuttal, and therefore we re-explain both concepts here.
By direct interaction, we mean that the features/embeddings of two modalities (i.e., two graphs) are put together to do some computation, whether a weighted sum of the feature matrices or a latent embedding computed with attention over them.
We define indirect interaction as the opposite of direct interaction: the features/embeddings of the two graphs are not combined directly; instead, each is combined with the bottleneck separately, i.e., the bottleneck serves as a bridge for the interaction between the two graphs.
Starting from the above definitions, the previous works [58][59] and the approach you mentioned both put the features/embeddings of the two graphs together to do certain computations, and therefore fall into the category of direct interaction.
Regarding your statement that GNNs can pass information in multiple hops, there may be a misunderstanding of the concepts of direct and indirect interaction: the multi-hop passing of information in GNNs happens within a graph, between multiple nodes, whereas the direct and indirect interactions that we have defined are between two graphs, which is a completely different thing.
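To make the distinction concrete, the following numpy sketch illustrates one round of indirect interaction, where each modality exchanges information only with shared bottleneck tokens. The single-head attention, the averaging rule, and all names here are illustrative assumptions, not the exact architecture in our paper:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v):
    # single-head scaled dot-product attention
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

def bottleneck_fusion_step(z_sc, z_fc, z_b):
    """One round of indirect interaction: each modality exchanges
    information only with the shared bottleneck tokens, never with
    the other modality directly."""
    # the bottleneck gathers information from each modality separately
    z_b_new = 0.5 * (attend(z_b, z_sc, z_sc) + attend(z_b, z_fc, z_fc))
    # each modality then reads back from the updated bottleneck
    return attend(z_sc, z_b_new, z_b_new), attend(z_fc, z_b_new, z_b_new), z_b_new
```

Note that no attention score is ever computed between a token of `z_sc` and a token of `z_fc`; all cross-modal information must pass through the few tokens of `z_b`.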
2. In response to the second weakness, firstly, thank you for your detailed explanation; our previous wording does need to be reconsidered. Regarding your point that the effect is not obvious, the experimental results suggest that indirect interaction does bring some performance improvement. Our starting point at the very beginning of this paper was to explore the role of indirect interactions, which is an unexplored issue, and thus we focused more on indirect interactions. But as you said (and as we agreed in our previous rebuttal), direct interaction can certainly be beneficial for multimodal tasks, and so in future work we will combine direct and indirect interaction for further exploration.
3. In response to the third weakness, thank you for your suggestion; we recognise your point and have therefore redesigned our experimental evaluation process. We still use 10-fold cross-validation, but in each fold we divide the dataset 8:1:1 (80% for training, 10% for validation, and 10% for testing). Finally, the results are averaged over the 10 folds. Due to the current time constraints, we only give the new comparison results on the HCP dataset in this rebuttal; the other new results will be given in a future version of the paper.
| | | **HCP Dataset** | | | | |
|:-----------------:|:------------:|:---------------:|:--------------:|:--------------:|:--------------:|:------------:|
| **Method** | **Modality** | **ACC** | **SEN** | **SPE** | **F1** | **AUC** |
| **FGDN** | fMRI | 64.87±2.76 | 59.28±12.18 | 69.64±9.90 | 60.81±5.05 | 64.46±2.99 |
| **FGDN** | sMRI | 60.26±3.25 | 51.25±18.56 | 68.00±13.33 | 52.48±12.16 | 61.62±3.94 |
| **BrainGNN** | fMRI | 63.62±4.12 | 64.52±6.52 | 61.87±11.21 | 60.32±4.98 | 61.78±5.97 |
| **BrainGNN** | sMRI | 64.01±3.99 | 65.00±5.99 | 62.24±9.76 | 61.53±4.74 | 63.65±5.80 |
| **SVM** | fMRI, sMRI | 62.27±3.25 | 53.25±18.56 | 70.00±13.33 | 54.48±12.16 | 61.62±3.94 |
| **Random Forest** | fMRI, sMRI | 66.51±2.67 | 52.41±6.32 | 78.57±4.79 | 58.89±4.38 | 65.49±2.76 |
| **SNF** | fMRI, sMRI | 53.89±4.96 | 51.56±10.21 | 55.21±9.22 | 52.14±10.83 | 59.25±6.38 |
| **MGCN** | fMRI, sMRI | 66.85±4.95 | 60.33±9.20 | 74.28±6.64 | 63.13±6.39 | 67.31±5.09 |
| **GBDM** | fMRI, sMRI | 65.05±5.23 | 61.80±6.22 | 67.55±9.45 | 62.61±4.65 | 64.68±5.02 |
| **MMGNN** | fMRI, sMRI | 67.46±4.27 | 53.03±14.48 | 79.82±8.10 | 58.76±11.65 | 66.42±4.81 |
| **AL-NEGAT** | fMRI, sMRI | 68.19±3.11 | 64.71±7.15 | 72.18±4.75 | 66.38±5.12 | 68.94±4.23 |
| **Ours** | fMRI, sMRI | **71.51±4.53** | **66.76±10.05** | **80.82±4.67** | **69.01±6.92** | **72.79±4.86** |
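As a sanity check on the protocol described above, here is a minimal numpy sketch of one way to generate disjoint 80/10/10 splits across 10 folds, so that the test set is never used for model selection or early stopping. The rotation rule for picking the validation fold is an illustrative assumption, not necessarily our exact implementation:

```python
import numpy as np

def ten_fold_splits(n_samples, seed=0):
    """Yield disjoint (train, val, test) index sets for each of 10 folds.
    In fold k, one tenth of the data is the test set, another tenth is
    the validation set (used for early stopping / hyperparameters), and
    the remaining 80% is the training set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, 10)
    for k in range(10):
        test = folds[k]
        val = folds[(k + 1) % 10]  # illustrative: rotate the validation fold
        train = np.concatenate([folds[j] for j in range(10)
                                if j not in (k, (k + 1) % 10)])
        yield train, val, test
```

Final performance is then reported as the mean and standard deviation of the test-set metrics over the 10 folds.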
Answers to questions:
1. Thank you for your careful observation; we will update Figure 1 in the new version of the paper.
2. Thanks for your suggestion; we added SNF as a baseline method in the new comparison experiment. As for ICA, we could not find a public implementation of it, so it was not included in the comparison. | Summary: The author proposes a novel regional heterogeneous multimodal brain networks fusion strategy to alleviate the issue of regional heterogeneity of multimodal brain networks. This strategy uses a graph convolutional network for the extraction of initial features of nodes (brain regions from the AAL atlas) and a transformer-based fusion bottleneck module for the fusion of the structural connectivity matrix and the functional connectivity matrix. This fusion strategy achieved the best results on the tasks of gender classification and depression classification.
Strengths: 1. This work is well written and the figure is clear, making it easy for the reader to understand what has been done.
2. This work is of great significance. How the structural and functional networks of the brain are fused has been an open question, because our understanding of how anatomical constraints relate to elaborate brain functions is still fragmentary. Addressing the fusion of brain structure and function from a deep learning perspective can help facilitate understanding of the relationship between structure and function.
Weaknesses: 1. The authors state "alleviate the regional heterogeneity between multimodal brain networks". This is a very ambitious question, and the high classification accuracy alone does not account for the alleviation of inter-regional heterogeneity. The authors should do detailed analysis experiments based on brain regions on the basis of high accuracy to explore which brain regions are non-heterogeneous between modalities, which brain regions are heterogeneous between modalities, and whether removing these heterogeneous brain regions can achieve better classification results.
2. The authors state that they were inspired by MBT, so how does this work differ from MBT? It seems that the authors' idea is to not allow interaction of tokens between structure and function; why is that? Shouldn't structure and function be meant to interact with each other? The authors say that it effectively improves the performance of the model (line 189), but I cannot see from the experiments any results that prove this (Transformer with bottleneck token rather than w/o Trans-Bottleneck). The authors do not compare with MBT's approach to prove the effectiveness of their proposed method.
3. The authors state in line 257 that the initial features of the function (Xfc) are averaged time series, but the Xfc shown in Figure 1 is the functional connectivity matrix. These two points are contradictory. And, since the initial features of the structure are the structural connectivity matrix, why should the initial features of the function be the time series of fMRI instead of the functional connectivity matrix?
4. Are the Final Bottlenecks obtained based on Zb only? If so, why not concatenate Zsc, Zb, Zfc as inputs to the MLP layer? The authors need to do ablation studies to prove that Zb-based classification is the most accurate. What does Nb in Equation 4 mean?
5. How is the threshold chosen for converting the connectivity matrix to the adjacency matrix for the two modalities? Firstly, the authors did not write what the threshold value is. Secondly, the choice of threshold for the structural and functional connectivity matrices has a big impact on the downstream task; it is suggested to do ablation studies to show how the threshold is chosen.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See Weaknesses for details.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: See Weaknesses for details
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Answers to weaknesses:
1. For the first weakness, we understand your concern and have thought about it in the same way, but we have not yet come up with a good way to reflect the regional heterogeneity of brain networks. Your suggestion of exploratory experiments based on brain regions was something we had considered, but since each of our graphs contains 90 brain regions, an exhaustive exploration over subsets of brain regions would require 2^90 experiments, which is clearly unrealistic. So for now, we are still only able to measure the model in terms of classification performance on the downstream task.
2. For the second weakness, since this work is investigating the regional heterogeneity between SC and FC, we believe that direct interaction will not achieve better results due to misaligned feature embeddings. So this indirect interaction of fusion bottlenecks is proposed.
The focus of our approach is that structure and function do not interact directly, but that does not mean they do not interact at all. We adopt an indirect interaction approach, shown in Figure 3 of the paper, where structure and function interact indirectly with each other through the fusion bottleneck.
Secondly, about the performance improvement of the fusion bottleneck on the model: there is in fact an experimental proof in the paper; perhaps we did not describe it clearly enough. The experimental setup for w/o Trans-Bottleneck is mentioned in line 308 of the paper: a standard Transformer directly computes attention over the tokens of the two modalities. The third and fourth rows of Table 2 therefore show the role of the fusion bottleneck (Fig. 3), i.e., the comparison between direct and indirect interactions.
Finally, about not comparing with MBT: MBT itself belongs to the field of computer vision, performing audio-video modal fusion, so it cannot be used directly in our scenarios; therefore we did not include it in the comparison experiments (as mentioned in lines 273-274 of the paper, our comparison methods are selected from research methods that work directly on SC and FC).
3. For the third weakness, this is indeed a writing error on our part; what is shown in Figure 1 is correct: we use the functional connectivity matrix as the feature.
4. For the fourth weakness, thank you for this good suggestion, we are attaching the additional experiments to the rebuttal.
We apologize for the confusion: Nb stands for the number of bottlenecks; it is not specifically labelled here because it was defined earlier.
5. For the fifth weakness, regarding the value of the threshold: we have in fact done an ablation experiment on the threshold, through which we selected the optimal value for the subsequent experiments. We did not put the results of this ablation into the paper because we thought it was not the focus of our research; we now give the results in the "global" rebuttal pdf file (refer to Fig. 1 in the "global" rebuttal pdf).
| Input | HCP Dataset | | | | |
|------------------------|--------------|--------------|--------------|--------------|--------------|
| | Acc | Sen | Spe | F1 | Auc |
| **Z_sc + Z_fc** | 77.62±3.69 | 74.96±7.70 | **81.79±7.40** | 76.30±4.33 | 78.38±3.68 |
| **Z_sc + Z_b + Z_fc** | 77.57±4.06 | 74.77±9.26 | 80.00±9.03 | 75.34±4.50 | 77.38±4.03 |
| **Only Z_b (ours)** | **78.63±4.36** | **75.59±6.75** | 81.25±6.04 | **76.49±4.91** | **78.42±4.38** |

| Input | Two-site Dataset | | | | |
|------------------------|-------------------|--------------|--------------|--------------|--------------|
| | Acc | Sen | Spe | F1 | Auc |
| **Z_sc + Z_fc** | 75.73±1.83 | 69.01±4.87 | **82.39±3.01** | 72.55±3.04 | 75.70±1.82 |
| **Z_sc + Z_b + Z_fc** | 75.29±1.89 | 69.09±3.96 | 81.50±4.89 | 72.34±2.58 | 75.29±1.89 |
| **Only Z_b (ours)** | **78.48±1.43** | **76.20±4.06** | 80.72±3.60 | **77.35±1.97** | **78.46±1.43** |
---
Rebuttal 2:
Comment: I thank the authors for the rebuttal. All my raised questions have been properly addressed. Hence, I would like to revise my rating to borderline accept.
---
Rebuttal Comment 2.1:
Comment: I need to ask reviewer A4mz: if all the questions raised have been properly addressed, why isn't the reviewer giving a higher score? Even more so because reviewers are asked to use the "borderline" decisions "sparingly". I believe this is important for transparency with the authors and for an easier decision by the other reviewers and area chairs. | Rebuttal 1:
Rebuttal: In response to some reviewers' and the ethics reviewer's questions about negative social impact, we would like to explain the following.
First, regarding the dataset collection process: the Human Connectome Project (HCP) dataset, as a publicly available dataset used in numerous previous studies, is not ethically questionable. It is true that the hospital dataset, held in collaboration with our partner hospitals, is not yet publicly available, but the data was collected with the consent of the subjects, who were clearly informed of the purpose of the sample collection, and all identifying information about the samples is hidden in the hospital dataset. Therefore, it does not adversely affect any individual, and there are no ethical or moral issues.
Secondly, with regard to research on depression: the work we have done so far has been limited to the scientific research stage and has not been put to clinical application. We can guarantee that none of the research conducted in this work will have any negative social impact.
Thirdly, regarding the ethics reviewer's question about the dataset being too small and the use of the depression dataset: this is in fact the only dataset available to us, because data collection in this field is very difficult and processing the data is very costly, so the dataset is small for the time being. As for the use of depression datasets, the previous two points have basically answered the question: there are no moral or ethical issues, and the work is currently limited to scientific research without any negative social impacts.
(The explanation above will be appended to the full paper of the new version.)
In this global rebuttal we accompany the results of some supplementary experiments (a pdf file).
Pdf: /pdf/d5a73738ca8bfb2d64b5d78c4d773f44ebeba92e.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The article discusses the use of multimodal fusion as a research technique in neuroscience to extract complementary information from multiple modalities. Previous research, however, has neglected the regional heterogeneity between structural connectivity (SC) and functional connectivity (FC) and used inefficient ways of multimodal fusion. To address this issue, the authors propose a novel Regional Heterogeneous multimodal Brain networks Fusion Strategy (RH-BrainFS), which uses a brain subgraph networks module to extract regional characteristics and a transformer-based fusion bottleneck module to alleviate the regional heterogeneity between SC and FC. The proposed method outperforms several state-of-the-art methods on various neuroscience tasks and is the first work to propose a solution to the issue of structural-functional modal regional heterogeneity.
Strengths: First, the research problem is significant. Structural-functional modal regional heterogeneity is a recent popular research topic, and the authors are the first to propose a method that works well in this situation. The novelty of the paper is sufficient: to alleviate the issue of regional heterogeneity of multimodal brain networks, the authors propose a novel Regional Heterogeneous multimodal Brain networks Fusion Strategy (RH-BrainFS), using a BrainSubGNN module and a Trans-Bottleneck module to fuse regional heterogeneous multimodal brain networks for neuroscience tasks. The experiments are sufficient: the authors conduct experiments on two mainstream MRI brain benchmarks. Extensive experiments demonstrate the effectiveness of RH-BrainFS in multimodal brain network fusion tasks on depression classification and gender classification datasets.
Weaknesses: 1. The authors have not explained in detail why "forbidden interaction" is used in the fusion bottlenecks. The authors are encouraged to provide more motivation and explanation for this part.
2. The authors have not provided a detailed speed and time-consumption comparison with other methods. The proposed method seems time-consuming.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The authors use thresholding to get Asc and Afc from Xsc and Xfc; how does the selection of the threshold affect the final results? How sensitive are the results?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: see the weakness part
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Answers to weaknesses:
1. For the first weakness, "forbidden interactions" means that we remove direct interactions between SC and FC (as in the standard Transformer model, directly calculating attention scores across the tokens of different modalities constitutes direct interaction). Our initial motivation was the belief that there is regional heterogeneity between SC and FC, so their features or latent embeddings would be misaligned and direct interaction would lead to poorer results. We therefore proposed the indirect interaction of fusion bottlenecks, and later experimentally verified its effectiveness (compare the ablation experiments in the third and fourth rows of Table 2). In short, we removed the direct interaction between SC and FC (i.e., direct interaction is forbidden) and adopted the indirect interaction of the fusion bottleneck.
2. For the second weakness, we supplemented a time-consumption experiment (recording the average training time for 100 epochs and the average time for one inference on the Two-site dataset), and, as you would expect, our method incurs a greater time overhead.
To explain these results: MGCN, GBDM and MMGNN focus more on the processing of data features, so common GNN frameworks (which are simpler than Transformer or attention mechanisms) are used in their network structures, giving them better computational efficiency.
Secondly, AL-NEGAT is an attention-based approach that has similarities with our Transformer-based one, so the time overhead that AL-NEGAT generates is of the same order of magnitude as ours.
Given these results, we admit that our method does underperform in terms of computational efficiency, but it falls within an acceptable range; after all, our model's performance is noticeably improved.
Answers to questions:
1. For the first question, regarding the value of the threshold: we have in fact done an ablation experiment on the threshold, through which we selected the optimal value for the subsequent experiments. We did not put the results of this ablation into the paper because we thought it was not the focus of our research; we now give the results in the "global" rebuttal pdf file (refer to Fig. 1 in the "global" rebuttal pdf).
| **Method** | **Train Time (s / 100 epochs)** | **Inference Time Cost (s / 1 inference)** |
|:-------------:|:-----------------------------------:|:--------------------------------------------:|
| **MGCN** | 23.875 | 0.017 |
| **GBDM** | 20.388 | 0.013 |
| **MMGNN** | 23.905 | 0.016 |
| **AL-NEGAT** | 31.573 | 0.023 |
| **Ours** | 33.875 | 0.025 |
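For clarity, the thresholding step discussed in the answer to the first question can be sketched as follows. Binarizing by absolute value and removing self-connections are illustrative assumptions here; the exact rule and the threshold value are the ones tuned in the ablation:

```python
import numpy as np

def connectivity_to_adjacency(conn, threshold):
    """Binarize a symmetric connectivity matrix into an adjacency
    matrix: keep an edge only where the absolute connection strength
    exceeds the threshold, and remove self-connections."""
    adj = (np.abs(conn) > threshold).astype(float)
    np.fill_diagonal(adj, 0.0)
    return adj
```

Because the downstream GNN's message passing depends entirely on which edges survive, the sensitivity to `threshold` is exactly what the ablation in the "global" rebuttal pdf measures.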
---
Rebuttal Comment 1.1:
Comment: After the rebuttal, authors well addressed my concern. So I raised my rate to 'accept'.
---
Reply to Comment 1.1.1:
Comment: Thank you for recognising our work and raising your score. | null | null | null | null | null | null |
AND: Adversarial Neural Degradation for Learning Blind Image Super-Resolution | Accept (poster) | Summary: This paper proposes a method of AND to learn neural degradation for the task of blind image super-resolution. The core idea is learning to degrade HR images by neural networks, trying to synthesize real-world degradations. Based on the synthesized data, a restoration model can be well trained. The proposed AND model has a unique advantage over the current state of the art in that it can generalize much better to unseen degradation variants. Experimental studies on public datasets show its effectiveness.
Strengths: + The paper is generally well written, and it is easy to follow.
+ The paper conducts extensive studies including ablation studies, showing effective results.
Weaknesses: There are several issues in this paper, including:
- The idea of learning degradation to better learn blind image super-resolution is not new and has been studied in existing works, such as "To learn image super-resolution, use a GAN to learn how to do image degradation first" (ECCV2018). The contribution can hardly be considered important.
- The claim "But regrettably, adversarial learning has not been applied to neural network models of signal restoration" is not valid. In fact, there have been many works in this direction, for example "Robust Real-World Image Super-Resolution against Adversarial Attacks" (ACMMM2021). There are more that are not given as examples.
- The experimental results need more clarification and explanations.
- Minor: L238, "By this way", L278, "So our method need an ..."
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. The qualitative results are not promising and are still vague compared with other methods. This needs more explanation.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors did not address the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:** The idea of learning degradation to better learn blind image super-resolution is not new, and has been studied in existing works, like "To learn image super-resolution, use a GAN to learn how to do image degradation first" (ECCV2018).
**A1:** You seem to misunderstand the main contribution of our paper. It may help for us to clarify further.
The proposed neural degradation prior is an untrained neural network, which does not learn to imitate any specific degradation effects, hence not requiring degraded LR training images at all. The untrained network is able to work as an image degradation prior, because it is designed to satisfy the following two common properties of a wide range of image degradations. First, the degradation effects can be modeled by CNN operations. Second, most degradations of interest can be considered as a small deviation from the identity transformation.
On the other hand, the ECCV2018 paper uses a trained network to imitate the degradation effects using a given LR image set; the learnt degradation is then used to train the super-resolution model. This is a totally different methodology from ours, and the methodology difference leads to the inferior generalization ability of the former to the latter. Their method would fail easily if the degradation of the chosen LR training images mismatches the real degradation at the inference stage, while our method can generalize much better to unseen degradation variants with the untrained neural degradation prior.
We would like to show the generalization ability gap by experiments. However, the ECCV2018 paper focuses only on facial image SR, which is by itself a weakness, so we cannot compare it fairly with our method. Luckily, there is another paper published in ECCV2022, named "From face to natural image: Learning real degradation for blind image super-resolution". This ECCV2022 paper, like the ECCV2018 paper, also uses GAN to learn degradation from an LR image set but it can be applied to natural image SR task, so we can compare its performance with our method. The name of the ECCV2022 method is ReDegNet, and its performance is already shown in Table 1 of our paper (page 8) and in Table 1 of our supplementary material (page 4). The quantitative comparison proves that our method does generalize better on all four real-world SR datasets.
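To illustrate the second property of the prior, "a small deviation from the identity transformation", here is a minimal numpy sketch of a conv kernel initialized as a centered delta (the identity) plus a small perturbation. The kernel size, perturbation scale, and the naive convolution loop are illustrative assumptions, not the network used in the paper:

```python
import numpy as np

def identity_conv_kernel(ksize=3, eps=0.01, rng=None):
    """A conv kernel initialized as the identity (a centered delta)
    plus a small random perturbation, so the layer starts as a slight
    deviation from the identity transformation."""
    if rng is None:
        rng = np.random.default_rng(0)
    k = np.zeros((ksize, ksize))
    k[ksize // 2, ksize // 2] = 1.0
    return k + eps * rng.normal(size=(ksize, ksize))

def conv2d_same(img, kernel):
    """Naive 'same' 2-D cross-correlation with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out
```

With `eps = 0` the layer reproduces its input exactly; with small `eps` it produces a mild, untrained degradation, which is the behaviour the prior exploits without ever fitting a specific LR image set.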
***
**Q2:** The claim "But regrettably, adversarial learning has not been applied to neural network models of signal restoration" is not valid. As a fact, there have been many works in this direction. For example, "Robust Real-World Image Super-Resolution against Adversarial Attacks" (ACMMM2021).
**A2:** Thank you for pointing this out. We will tune down our statement accordingly, and discuss the contribution of the ACMMM2021 paper in the revised version of our paper. However, here we would like to emphasize the difference between the ACMMM2021 paper and our paper.
The ACMMM2021 paper borrows the earlier research in adversarial training for image classification tasks. But it does not account for the differences between the classification and restoration tasks. In the previous research on adversarial training for image classification tasks, Out-of-Distribution (OOD) perturbations are introduced through the deliberate efforts of malicious attackers. This approach utilizes pixelwise additive high-frequency noise as a concealed and effective perturbation attack. Note that the adversarial attack is very different from the type of signal degradations in restoration tasks. Specifically, the perturbations in super-resolution tasks encompass a mixture of blur, noise, and nonlinear transformations. As a result, for restoration tasks noise no longer predominantly influences the OOD perturbation as in classification tasks. Therefore, the proposed neural degradation prior is a more suitable perturbation model for real-world image restoration.
The experimental settings of the ACMMM2021 paper and our paper also differ due to different understandings of real-world image SR tasks. In the ACMMM2021 paper's experiment, real-world LR images were not used directly as input for the SR model. Instead, they were first manipulated by a malicious attacker, which may not represent the most crucial scenario for real-world image SR. Conversely, our approach utilizes real-world LR images directly as input and demonstrates promising performance on real-world image SR datasets.
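The adversarial perturbation in our setting therefore lives in degradation-parameter space rather than pixel space. A toy sketch of such an inner maximization follows; the finite-difference ascent (instead of backprop), the step sizes, and all names are illustrative assumptions, not our actual training loop:

```python
import numpy as np

def adversarial_degradation_step(theta_identity, restore_loss,
                                 eps=0.05, lr=0.01, steps=5):
    """Inner maximization sketch: perturb degradation parameters within
    an eps-ball around the identity configuration to find a degradation
    that the current restorer handles worst."""
    theta = theta_identity.copy()
    for _ in range(steps):
        # estimate the gradient of the restoration loss by finite differences
        g = np.zeros_like(theta)
        for i in range(theta.size):
            d = np.zeros_like(theta)
            d.flat[i] = 1e-4
            g.flat[i] = (restore_loss(theta + d) - restore_loss(theta - d)) / 2e-4
        theta = theta + lr * g  # ascent: make the restorer's loss larger
        # project back into the eps-ball around the identity parameters
        theta = theta_identity + np.clip(theta - theta_identity, -eps, eps)
    return theta
```

The restorer is then trained against these worst-case degradations, which is what distinguishes the approach from fixed pixel-space attacks.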
***
**Q3:** The qualitative results are more vague, compared with other methods.
**A3:** Yes, your observation is accurate. However, we must emphasize that the enhanced sharpness in the image outputs from other methods results from an artifact known as "watercolor-like artifacts" or "painterly artifacts". This artifact can occur when SR algorithms, especially GAN-based methods, generate images that exhibit characteristics similar to brush strokes or painterly textures, rather than natural or realistic details. The boundaries between two brush strokes would be very sharp, even sharper than the HR images. It is considered an artifact because it deviates from the original intention of SR, which is to enhance the detail and clarity of the image without introducing artistic or stylized characteristics. Compared to other methods, our SR output more closely resembles a natural HR image.
***
**Q4:** The authors did not address the limitations.
**A4:** We did discuss the limitations of our method in the Limitations section of our supplementary material (page 3).
***
Since there are several misunderstandings regarding our paper, we would greatly appreciate it if you could review it again. If you have any further questions, please don't hesitate to reach out. We are more than willing to provide clarifications both within our discussion and in the paper itself.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Thanks a lot for providing the rebuttal. I appreciate the effort, but it fails to resolve my concerns. I do not think I misunderstood the paper: it indeed shares the core idea with the ECCV2018 paper, the more trivial aspects notwithstanding. Some of my other concerns are not resolved by the rebuttal either. My initial rating will be kept.
---
Rebuttal 2:
Title: Followup Response to Reviewer BEQG
Comment: Dear Reviewer BEQG,
We would like to thank you again for the valuable time you devoted to reviewing our paper. We believe that we have addressed your concerns. Since the end of discussion period is getting close and we have not heard back from you yet, we would appreciate it if you kindly let us know of any other concerns you may have, and if we can be of any further assistance in clarifying them.
Thank you once again for your contribution to our paper's development.
Authors | Summary: This paper proposes an adversarial approach for blind image super-resolution. Instead of using combinations of synthetic degradations (e.g., Gaussian blur, JPEG compression), this paper proposes to use a degradation network to construct LR patches from HR patches during training. The degradation network is optimized with a constraint that the network parameters fall within a local region of that of an identity mapping. The proposed approach outperforms existing works on common datasets.
Strengths: 1. The proposed ANDNet and ANDGAN achieve promising performance on multiple common datasets, outperforming existing works.
2. The idea of using a degradation network is interesting.
Weaknesses: 1. The working mechanism of the proposed approach is not clear. While the empirical results show the effectiveness of this approach, more explanation of the mechanism is necessary. Specifically,
**a)** Without an additional constraint, why do the degradation network and restoration network correspond to degradation and restoration respectively? Is it possible that the degradation network attempts to enhance the HR input and the restoration network degrades it back?
**b)** Why does the learned degradation network reveal the real-world degradations? In theory, there could be unlimited possibilities. How does it work?
2. The claim in Ln.118 that `Almost all types of image degradation could find a corresponding operation in a standard convolutional neural network.` is inaccurate. In practice, there are many degradations that cannot be represented by standard CNN operations. For example, the camera shot noise and read noise corrupt the input in the raw image domain, which cannot be represented by common operations. The same holds for multiplicative noise and JPEG compression. While CNNs can be used to approximate the degradations, the above statement is over-claimed.
3. The ablation studies (Sec. 5) are confusing. The configuration in Table 2 is difficult to understand. Please explain the ablation studies clearly. Specifically,
**a)** What is classical SR training?
**b)** What is synthetic data augmentation?
**c)** What is severe random style shift? (The difference from b) is the identity initialization; why is it related to style?)
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: While the proposed approach achieves promising performance, the underlying mechanism and motivation are unclear. It is advised to address the concerns in the weakness section. I would be happy to adjust the rating if the aforementioned concerns are resolved.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors addressed the limitations in the supplementary material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:** Why do the degradation network and restoration network correspond to degradation and restoration respectively? Is it possible that the degradation network attempts to enhance the HR input and the restoration network degrades back?
**A1:** As demonstrated in Algorithm 1 of our supplementary material, the degradation network and restoration network are optimized alternately. With a temporarily fixed restoration network, the degradation network continually seeks to increase the loss, intensifying image degradation. Conversely, with a temporarily fixed degradation network, the restoration network consistently endeavors to minimize the loss, enhancing image restoration quality. Therefore, we do not believe that the scenario where the degradation network enhances while the restoration network degrades would result in a stable convergence point. While this situation might appear feasible in theory, in practice, even a minor random disturbance could quickly propel the entire system out of that situation.
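The alternating scheme described in A1 can be illustrated with a deliberately tiny sketch: a scalar "degradation network" `d` (identity-initialized, constrained to a feasible region around 1) and a scalar "restoration network" `r`, optimized alternately. All names, constants, and the feasible region here are illustrative assumptions, not the authors' actual Algorithm 1:

```python
import random

def loss(r, d, x=1.0):
    # restoration applied to the degraded input, compared against ground truth
    return (x - r * (d * x)) ** 2

D_LO, D_HI = 0.6, 1.0   # assumed feasible region around the identity mapping d = 1
lr = 0.05

random.seed(0)
d = 1.0 - 0.05 * random.random()   # identity init plus a small random disturbance
r = 1.0

for step in range(200):
    # degradation step: gradient ASCENT on d (tries to increase the loss)
    before = loss(r, d)
    grad_d = 2.0 * (1.0 - r * d) * (-r)
    d = min(max(d + lr * grad_d, D_LO), D_HI)
    assert loss(r, d) >= before - 1e-12   # the ascent step never lowers the loss

    # restoration step: gradient DESCENT on r (tries to decrease the loss)
    before = loss(r, d)
    grad_r = 2.0 * (1.0 - r * d) * (-d)
    r = r - lr * grad_r
    assert loss(r, d) <= before + 1e-12   # the descent step never raises the loss

print(d, r, loss(r, d))
```

In this toy run the degradation pushes `d` to the boundary of its feasible region while the restorer tracks it (roughly `r ≈ 1/d`), so the pair does not settle into the "degradation enhances, restoration degrades" scenario the question raises.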
***
**Q2:** Why does the learned degradation network reveal the real-world degradations? In theory, there could be unlimited possibilities. How does it work?
**A2:** We do not require any perturbed degradation network to correspond exactly with a real-world degradation. This is neither necessary nor feasible for our method. We only need to guarantee that the entire feasible region of the degradation network could cover most cases of real-world degradation. In other words, we only need to guarantee that most instances of real-world degradation could be represented by a degradation network. If we can establish a solid lower performance bound for the entire feasible region of the degradation network through adversarial training, this lower bound also applies to most real-world degradation cases, thus achieving robust restoration.
***
**Q3:** The claim that "Almost all types of image degradation could find a corresponding standard CNN operation" is inaccurate. There are many degradations that cannot be represented by standard CNN operations, such as noise in the raw image domain, multiplicative noise, and JPEG compression.
**A3:** Thank you for pointing this out. We will revise the claim to make it more rigorous. We completely agree with your point that the degradations you mentioned cannot be accurately represented by standard CNN operations. In fact, we addressed similar issues in the Limitations section of our supplementary materials. The example of noise in the raw image domain, which you referred to, is similar to the halftoning degradation we discussed. Similarly, the scenario involving multiplicative noise closely resembles a low-light environment. Perhaps we could claim that "Most dominant image degradations in real-world SR tasks could find corresponding standard CNN operations"? If you have any suggestions, please don't hesitate to share them with us.
***
**Q4:** The ablation studies is confusing. The configuration in Table 2 is difficult to understand. Please explain the ablation studies clearly.
**A4:** We apologize for any confusion that may have arisen. The ablation studies are intended to investigate the effects of the three major components of our method: adversarial perturbation, neural degradation, and identity initialization. In each ablation setting, we retain only specific components and assess the method's performance. The first column, labeled as "Configuration" in Table 2, is a name and an explanation for a particular ablation setting.
If we do not use adversarial perturbation and neural degradation at all, our method becomes a classical SR training method, which assumes that the image degradation model is an ideal bicubic downsampling. Most SR research adopts this setting, such as SRCNN, EDSR, and RCAN.
If we retain solely the adversarial perturbation without inducing neural degradation, it implies the utilization of an adversarial noise training method similar to "Generalized real-world super-resolution through adversarial robustness" (ICCVW 2021). The perturbation under this setting is additive noise, following most adversarial training research on image classification tasks.
If we use neural degradation with identity initialization, but without adversarial perturbation, our method is then SR training with synthetic data augmentation, working like "Designing a practical degradation model for deep blind image super-resolution" (ICCV 2021). During model training, the neural degradation would be random sampling near the identity initialization rather than adversarial sampling, and the generated LR patches work like synthetic data augmentations.
If we only remove the identity initialization from our method, the generated LR patches would no longer be visually similar to the HR patches. While the skeleton of the LR patches would remain unaffected, their color and texture would undergo a dramatic change, as the mapping would no longer be an identity mapping. This phenomenon is referred to as "severe random style shift" in our experiments.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' response. While the proposed method shows decent performance, it is non-trivial why this adversarial approach would converge to real-world degradations. Specifically, since there are unlimited ways to degrade an image, why would it correspond to real-world degradations? Given that this is a NeurIPS paper, it would be good to have more analysis and insights for this.
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer Qwb9
Comment: Thank you for your response regarding the correspondence between the unlimited degradations that our method could generate and the real-world degradations. We addressed this question in our earlier rebuttal (Q2 and A2). It may help for us to clarify further.
The degradations that our method can generate indeed have unlimited possibilities, forming an uncountably infinite set, denoted by $\mathbb{A}$. Similarly, the real-world degradations also form an uncountably infinite set, denoted by $\mathbb{B}$. It's evident that $\mathbb{A} \supset \mathbb{B}$.
The adversarial training in our method solves a min-max problem over set $\mathbb{A}$, specifically $L = \min_{\theta} \max_{d \in \mathbb{A}} \ell(\theta, d)$, where $\theta$ denotes the restoration parameters and $\ell$ the loss. This process decreases the upper bound $L$ of the loss within set $\mathbb{A}$. The upper loss bound for real-world degradations, $L' = \min_{\theta} \max_{d \in \mathbb{B}} \ell(\theta, d)$, is no higher than $L$, which means that the model always performs at least as well on real-world degraded images during inference as on highly adversarial degraded images during training.
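For completeness, the containment argument above can be written as a short derivation; here $\theta$ denotes the restoration network's parameters and $\ell$ a per-degradation loss (notation assumed for illustration, not from the rebuttal):

```latex
L  = \min_{\theta} \max_{d \in \mathbb{A}} \ell(\theta, d), \qquad
L' = \min_{\theta} \max_{d \in \mathbb{B}} \ell(\theta, d).
```

Since $\mathbb{B} \subseteq \mathbb{A}$, for every fixed $\theta$ we have $\max_{d \in \mathbb{B}} \ell(\theta, d) \le \max_{d \in \mathbb{A}} \ell(\theta, d)$; taking the minimum over $\theta$ on both sides preserves the inequality, hence $L' \le L$.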
To give readers a more concrete sense of the validity of our adversarial degradation model, we will visualize LR images generated by our method along with their corresponding kernels. As Reviewer 9QoZ and Reviewer vm11 suggested, we will visually compare them with real-world LR images, and also compare them quantitatively using the Feature Frechet Distance.
We hope that we have addressed your concerns, and we would greatly appreciate it if you could adjust your rating of our paper.
---
Rebuttal 2:
Title: Followup Response to Reviewer Qwb9
Comment: Dear Reviewer Qwb9,
As the discussion period draws to a close within the next few hours, we are writing to inquire if your concerns have been successfully resolved. We genuinely hope our efforts might lead to a possible adjustment in the rating.
Best regards,
Authors | Summary: This work proposed a novel blind SR method via adversarial neural degradation. The proposed adversarial neural degradation model can generate various nonlinear degradation effects without requiring any supervision. This enables the proposed method to deal with various real-world SR datasets. Experiments also verify the effectiveness of the proposed method.
Strengths: 1. Utilizing adversarial neural degradation model for degradation synthesis is novel.
2. The performance is promising.
Weaknesses: 1. In Table 1, the authors are expected to provide the SR results with full supervision by SOTA SR methods. The proposed method is not required to outperform these methods, since they are fully supervised while the proposed method is zero-shot. This can help the readers know the gap between the proposed method and supervised methods.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:** In Table 1, the authors are expected to provide the SR results with full supervision by SOTA SR methods. The proposed method is not required to outperform these methods, since they are fully supervised while the proposed method is zero-shot. This can help the readers know the gap between the proposed method and supervised methods.
**A1:** Thank you for your advice. We will incorporate it into the revised version of our paper. However, obtaining both degraded images and the corresponding latent images (ground truth) is expensive in reality. As a result, the number of images in real-world SR datasets is typically smaller than the number of images in the training sets used in classical SR. We are curious whether state-of-the-art SR methods trained with full supervision on such a small training set could serve as a reliable upper bound.
---
Rebuttal Comment 1.1:
Comment: Yes, the number of training pairs in real-world SR datasets is smaller. Their testing performance can be treated as overfitting to the current degradation. Therefore, the testing performance can serve as the upper bound for this specific degradation. The authors claimed that their model can cover most real-world degradations. I am curious about the performance of this model when compared with methods with full supervision. The authors are expected to give this kind of comparison in the rebuttal rather than promising to give the results in the revision.
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer JnSR
Comment: Thank you for your response regarding the comparison with methods with full supervision. We fully agree with your viewpoint that such an experiment can show the upper bound for a specific degradation. RealSR [1] and DRealSR [2] are two real-world image SR datasets we used in our experiments. In Table 5 (page 12) of the DRealSR paper [2], the authors had already evaluated the performance of models with full supervision using real-world HR/LR pairs. As we adopted ESRGAN as our SR model and didn't alter its architecture, the data from their table can be directly applied here. We reorganized their data into the following table for simplicity. Additionally, we included the performance of DAN [3] in the table to provide you with a more concrete sense of the gap between our method and the upper bound.
| Method | PSNR on RealSR [1] | PSNR on DRealSR [2] |
| --------------- | ---------------- | ---------------- |
| DAN [3] | 27.80 | 30.59 |
| ANDNet (ours) | 28.47 | 30.97 |
| Full Supervision | 29.15 | 31.92 |
[1] Toward real-world single image super-resolution: A new benchmark and a new model. ICCV 2019
[2] Component divide-and-conquer for real-world image super-resolution. ECCV 2020
[3] Unfolding the alternating optimization for blind super resolution. NeurIPS 2020 | Summary: This paper proposed a new image degradation system for blind image super-resolution tasks. The proposed method uses a neural network system to learn and capture the image degradation operations; combined with an image restoration network, the proposed method achieved satisfactory performance.
Strengths: The proposed method introduced an image degradation system captured by a neural network system. The neural network is intuitive and has shown effectiveness in capturing real-world degradation, based on the experimental results and analysis in the paper.
The presentation of the paper is good and the materials are well organized.
Weaknesses: The paper focuses on the degradation system and only adapted an existing SR model to be trained with the proposed system. It is not a weakness per se, but a new GAN system designed together with the degradation system could have better performance.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: + Can the author help visualize the degradation system's filter after the training, and compare it with real-world blur/noise/gamma kernel and effects?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper discussed some real-world use cases. The ablation study provides some insight into the system's different behavior.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:** The paper focuses on the degradation system and only adapted an existing SR model to be trained with the proposed system. It is not a weakness per se, but a new GAN system designed together with the degradation system could have better performance.
**A1:** Thank you for your advice. This work is inspired by two observations of image degradations. These two properties can be easily used in degradation model design, but for the time being, we cannot figure out how to use them in restoration model design. That is why we only adapted an existing SR model.
***
**Q2:** Can the author help visualize the degradation system's filter after the training, and compare it with real-world blur/noise/gamma kernel and effects?
**A2:** Thank you for your advice. We will include visualizations of the degradation system's filter in the revised version of our paper. However, for the current model, these visualizations can hardly contain any information. This is because there are several layers in the neural degradation, and each layer is only responsible for a very small portion of the degradation. We cannot simply combine them together due to the presence of non-linear activation layers between them. We intend to retrain the model with a much smaller degradation model, consisting of only two or three layers. This adjustment aims to make the visualizations more informative. If you have any suggestions, please do not hesitate to share them with us. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper presented a blind super-resolution algorithm, in which a neural network is used to represent image degradation, followed by the incorporation of adversarial learning mechanisms to study "hard cases". The technique to represent image degradation involves initializing a neural network, ensuring the initialization is an identity mapping, then inducing specific perturbations in the network's filtering and noise, resulting in irregular image degradation. Adversarial learning primarily involves monitoring difficult cases during training and subsequently increasing the penalty, establishing an adversarial approach.
Strengths: The main contribution of the paper is the application of neural networks to express forms of degradation, which is relatively innovative. This approach is more intricate than previous higher-order degradation and random degradation. Adversarial training, to some extent, may be beneficial as it promotes generality, though its innovativeness is not notably outstanding.
Weaknesses: Despite the novelty of using a neural network to represent degradation, there are several concerns.
First, a neural network represents degradation as a distribution, which has a range. An ICML paper [R1] pointed out that the range of this distribution could directly impact its performance and generalization abilities. The paper doesn't describe the distribution's range, nor is it clear how this range compares to that of other methods like RealESRGAN or BSRGAN.
[R1] Crafting Training Degradation Distribution for the Accuracy-Generalization Trade-off in Real-World Super-Resolution. ICML 2023.
Secondly, I have concerns about the experiments due to the possibly limited range of degradation in the network. It may perform better in specific ranges, whereas RealESRGAN and BSRGAN may have a much wider range of degradation.
Relatedly, the paper doesn't discuss the network's generalization performance for degradation. If degradation is modeled as a distribution, that distribution necessarily has a range, and neither enlarging nor narrowing it is inherently better. The paper does not offer any control over this range.
Lastly, the article's writing is problematic. Many descriptions are long and repetitive, like sections 3.1 and 3.2, which could have been resolved with a simple diagram; the verbose writing makes it difficult to communicate a straightforward idea.
Furthermore, the approach to representing degradation with a neural network seems intended to argue that this design of versatile degradation is more similar to real-life situations, particularly for low-resolution images that could be sampled from mobile ISPs. However, the paper lacks any descriptions or visualizations of low-resolution images and doesn't present experimental results to demonstrate the closeness of their design to actual cases. Hence, I have reservations about the nature of the low-resolution images they've created.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See weakness.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: Description is not clear
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for bringing the ICML 2023 paper to our attention, and we will discuss the contribution of the ICML paper in the revised version of our paper. As several of your questions are rooted in this paper, please let us first elaborate on the connection between the ICML paper and our research.
The ICML paper indicates that when the degradation distribution during both training and testing perfectly matches, the SR model exhibits favorable generalization and achieves high performance simultaneously. The binning method is utilized to adjust the joint distribution of the three parameters of a widely-used image degradation model, helping the training degradation distribution to better match the testing distribution.
We fully agree with the indication in the ICML paper regarding the importance of aligning training and testing degradation. However, we would like to emphasize that the ultimate objective should be the alignment of distributions across the set of degradation functions, whereas the ICML paper focuses solely on aligning the distributions of degradation parameters within a given model.
This nuanced yet critical distinction can be elucidated through a simple analogy: just as people cannot perfectly align a circle with a square by solely adjusting the circle's location and radius, they must also modify its shape. In this analogy, the location and radius correspond to the degradation parameters in the ICML paper, while modifying the shape corresponds to transitioning to a new family of functions, as demonstrated in our paper. In other words, if the blur-noise-JPEG model cannot generate LR images like the testing images, merely using the binning method to adjust the distribution of parameters is not able to align the distributions. So our neural degradation model and the ICML paper both attempt to align degradation distributions; however, they focus on distinct and parallel areas.
***
**Q1:** The paper doesn't describe the distribution's range, nor is it clear how this range compares to that of other methods like RealESRGAN or BSRGAN.
**A1:** We have described the feasible region of our model in Section 4.3 of our paper and in Section A.2 of our supplementary material. However, because of the distinct nature of the function family employed in our paper compared to other methods, conducting a quantitative comparison of their ranges poses challenges. In the revised version of our paper, we will present a visual comparison of LR patches.
***
**Q2:** The SR model may only perform better in limited range of degradation.
**A2:** Your assertion is certainly true for any SR model, including ours, as the ICML paper indicates the importance of aligning training and testing degradation. However, we believe that the SR model which performs better within the real-world degradation range is more valuable than one that works better in another range. Quantitative comparison could demonstrate that our method generalizes better across all four real-world SR datasets. The range limitation is inherent to its nature rather than a drawback.
***
**Q3:** The article's writing is problematic. Many descriptions are long and repetitive, like section 3.1 and 3.2.
**A3:** Thank you for your advice. We will revise those two sections to make them more concise.
***
**Q4:** The paper lacks visualizations of LR images.
**A4:** Thank you for your advice. We will include visualizations of LR images in the revised version of our paper.
***
**Q5:** Description of limitations is not clear
**A5:** We did discuss the limitations of our method in the Limitations section of our supplementary material (page 3).
---
Rebuttal Comment 1.1:
Title: Raise My Rating
Comment: After reading the authors' rebuttal, I decided to raise my rating a bit. Some descriptions from the authors make me think this paper is valuable. But I still think it can be done better experimentally.
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer 9QoZ
Comment: Thank you for your response and your suggestion for strengthening the experimental part of our paper. We will include the following two new experiments to address your concerns in the revised version of our paper.
First, to assess the performance of our method within a degradation range, we will compare it with other SOTA methods using synthetic HR/LR pairs. While the methods such as RealESRGAN or BSRGAN might work well in a large degradation range with a simple blur-noise-JPEG degradation model, we expect our model to outperform in cases of more complex but more realistic degradations, such as non-aligned double JPEG compression and degradations introduced by image enhancement processes.
Second, we will visualize LR images generated by our methods along with their corresponding kernels. We will visually compare them with real-world LR images, and also quantitatively compare them in the Feature Frechet Distance, as outlined in the ICML paper. | null | null | null | null | null | null |
Soft-Unification in Deep Probabilistic Logic | Accept (poster) | Summary: This work proposes a neural symbolic framework, DeepSoftLog, which extends DeepProbLog with a soft equivalent operation.
The authors also develop four properties that such a soft equivalency operation should satisfy.
The experimental results demonstrate that DeepSoftLog has better performance than the state-of-the-art baselines.
Strengths: Originality: 2/5
The idea of soft equivalency is not new; it has been adopted by works such as NTP.
Although the authors raise a novel suite of four standards that a good soft equivalency function should enjoy, they have not developed these definitions in enough depth to discuss the theoretical benefits that the properties could bring, such as convergence, relaxation, and learning efficiency.
Quality: 2/5
Pros: The authors provide proof that the soft equivalency function satisfies all four properties.
Cons: In the experiment section, all images are encoded with one-hot embeddings, which is quite problematic. This embedding not only creates an unfair comparison against the baselines but is also not meaningful for future reference.
Clarity: 3/5
Pros: The writing is pretty clear and the four properties are quite easy to understand.
Cons: The program syntax is hard to understand and not quite user-friendly. For example, p(~x):- q(~y) is not intuitive to understand, as y is not used in the head, and a new, unknown variable is introduced in the context.
Significance: 1/5
The theory part is not developed in enough depth, which undermines the case for why the properties are required.
Further, although the experimental results outperform the SOTA by a large margin, the embeddings are synthesized, which also undermines the credibility of the results.
Weaknesses: See "Strength" section.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Can you address what are the benefits that the four properties bring?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for taking the time to read and review our paper. We will address the reviewer's comments, ordered by topic.
> In the experiment section, all images are encoded with one-hot embedding, which is quite problematic.
The authors want to stress that the images are embedded by a neural network and are _not_ one-hot encoded. Perhaps the statement in appendix D1 was confusing, which says that the digits for the (symbolic) ground truth labels are represented as one-hot vectors. We will clarify this sentence.
> Further, although the experiment results outperform the SOTA quite a lot, the embedding is synthesized, which also undermines the credibility of the result.
We stress again that the embeddings in all experiments are learned from data by gradient descent. This is stated e.g. in the introduction when we say that we are "using learnable embeddings".
> "The program syntax is hard to understand and not quite user-friendly."
We adopt the syntax of the Prolog language (with some very minor additions), which we summarize in section 2.1. Prolog is by far the most popular and well-known language to represent logic knowledge and has been extensively used for this purpose. We therefore feel it is a natural choice for this paper, as it builds on the probabilistic and neural extensions of Prolog: ProbLog and DeepProbLog.
> "For example, p(x):- q(y) is not intuitive to understand, as y is not used in the head, and a new, unknown variable is introduced in the context."
We follow the Prolog convention where lowercase symbols are constants (see section 2.1). So `x` and `y` are not variables here. The rule `p(x) :- q(y).` means that if the atom `q(y)` is true, the atom `p(x)` is also true.
> "The theory part is not developed in-depth enough, which undermines the necessity of why the properties are required."
> "Although the authors raise a novel suite of 4 standards that a good soft equivalency function should enjoy, they have not developed deep enough from these definitions, to discuss the theoretical benefits that these properties could bring, such as convergence, relaxation, and learning efficiency."
We address the specific points that are mentioned.
- (Convergence) Giving proper convergence proofs for gradient-based optimization is very challenging, and often even impossible for neural network based systems [1]. We note that the related frameworks in neuro-symbolic AI also do not provide this. We do discuss local minima and gradient flow of our method and compare it with previous fuzzy methods (see e.g. lines 151-158 and 199-203).
- (Relaxation) Soft-unification is indeed a relaxation of the regular (i.e. hard) logic. This is known and has been shown in previous work [2].
- (Learning efficiency) We do make claims about the learning efficiency. Most crucially, our method gives gradients to all embeddings during a training step (disregarding critical points), while previous methods which used the Godel t-norm only give a gradient to at most 2 embeddings. We also demonstrate this empirically, see the ablations in figure 2.
If there are other concrete points where the reviewer feels our theoretic analysis is lacking, we would be happy to address those.
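The learning-efficiency point above (a product/probabilistic aggregation passes gradients to all proof paths, while the Gödel max passes a gradient to only one) can be checked numerically with a small sketch; the three proof scores and the noisy-or-style product aggregation are illustrative assumptions:

```python
def grad_fd(f, xs, i, eps=1e-6):
    # central finite-difference estimate of the partial derivative w.r.t. xs[i]
    up = list(xs); up[i] += eps
    dn = list(xs); dn[i] -= eps
    return (f(up) - f(dn)) / (2 * eps)

proofs = [0.3, 0.7, 0.5]   # scores of three alternative proof paths

godel = lambda xs: max(xs)  # Goedel-style aggregation of proofs
noisy_or = lambda xs: 1 - (1 - xs[0]) * (1 - xs[1]) * (1 - xs[2])  # product-based

godel_grads = [grad_fd(godel, proofs, i) for i in range(3)]
prod_grads = [grad_fd(noisy_or, proofs, i) for i in range(3)]

# max: exactly one proof path receives a nonzero (sub)gradient
assert sum(abs(g) > 1e-3 for g in godel_grads) == 1
# product-based aggregation: every proof path receives a gradient
assert all(abs(g) > 1e-3 for g in prod_grads)
print(godel_grads, prod_grads)
```

Only the argmax branch of `max` gets a nonzero derivative, while every factor of the product does, which is the contrast the ablation in figure 2 of the paper is said to demonstrate empirically.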
> Can you address what are the benefits that the four properties bring?
We summarize the important benefits. An in-depth discussion can be found in section 3.
- DeepSoftLog optimizes better compared to previous systems (e.g. NTP). This is mostly due to satisfying Def4. It has previously been shown [1] that gradient descent gets stuck in local minima if not all proof paths get updated simultaneously (see lines 151-158).
- Def2 also improves the training stability by disallowing redundancy in the proof paths (see also the ablations in figure 2).
- Soft-unification has so far been used only in fuzzy systems, while DeepSoftLog is equipped with probabilistic semantics. The use of probabilities as opposed to fuzzy values gives performance advantages. Note that this is the only aspect that impacts the computational complexity (see section 7). In summary, the computational complexity is the same as for ProbLog. The trade-off between performance and scalability for fuzzy versus probabilistic logic is well known in the literature.
- Def3 has previously been found to be necessary to make the soft-unification sufficiently expressive [4].
[1]: Swirszcz, Grzegorz, Wojciech Marian Czarnecki, and Razvan Pascanu. "Local minima in training of neural networks." arXiv preprint arXiv:1611.06310 (2016).
[2]: Sessa, Maria I. "Approximate reasoning by similarity-based SLD resolution." Theoretical computer science 275.1-2 (2002): 389-426.
[3]: de Jong, Michiel, and Fei Sha. "Neural theorem provers do not learn rules without exploration." arXiv preprint arXiv:1906.06805 (2019).
[4]: Julián-Iranzo, Pascual, Clemente Rubio-Manzano, and Juan Gallardo-Casero. "Bousi~Prolog: a Prolog extension language for flexible query answering." Electronic Notes in Theoretical Computer Science 248 (2009): 131-147.
---
Rebuttal Comment 1.1:
Comment: I have double-checked the implementation in the supplementary material, and the embeddings are pictures instead of the one-hot embeddings as claimed in Appendix D1. I will raise my rating accordingly. | Summary: This paper studies the notion of *soft-unification*, first employed by the Neural Theorem Prover to learn logic rules in an end-to-end differentiable manner. They outline several properties that need to hold for soft-unification to be semantically meaningful and efficiently trainable; properties which previous frameworks fail to satisfy. Consequently, the authors introduce a framework satisfying such properties, which they term *DeepSoftLog*; it is essentially an integration of embeddings (as opposed to symbols) with probabilistic logic programming.
Strengths: - While the idea of soft-unification is certainly not new, the authors tackle the problem in a principled manner by integrating embeddings with ProbLog, a language for probabilistic logic programming.
- The paper is very well written for the most part, with the authors using examples throughout to help elucidate the exposition.
- While doing a good job with exposition, the authors also managed to stay rigorous throughout the paper, with definitions and theorems.
Weaknesses:
- I find the paper to be lacking in terms of experimental evaluation. The experimental section does not offer much that convinces me of the merits of the proposed approach:
- The advantage of using probabilistic semantics as opposed to fuzzy semantics is well documented in the literature, which seems to be the main conclusion of section 5.1.
- I'm not really sure what the point of evaluating on MNIST-addition is. The authors seem to be using it to argue for the scalability of their approach. However, I don't see how their approach is more scalable compared to DeepProbLog since they incur an extra branching factor owing to soft-unification. They mention the use of approximate inference techniques, but then they have to deal with the pitfalls of sparse gradients, which is understandable given the intractability of exact inference. Furthermore, since such approximations were not proposed by the authors, it seems only fair to also utilize them when comparing against DeepProbLog.
- Section 5.3 is, to me, perhaps what this entire paper is all about. However, I feel it would need to be expanded considerably.
- There has been a lot of work on Neuro-Symbolic AI, and in my opinion, the related works section does not do it justice.
Typos:
- I believe line 87 should read as "(2) Disjoin the proofs..."
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Could you please explain what you meant by the paragraph on lines 204-208?
- On lines 219-221 you mention that you need a neural network for every functor. I'm guessing this does not scale if you have a large logic program?
- On lines 247-248 you mention that you "do not actually ground the soft-unification function s, but provide it as a built-in". Could you please clarify what you mean by that? It is my understanding that when compiling into a circuit, everything is grounded?
- On lines 259-260 you mention that "An important constraint we apply is the soft-unification predicate needs to be ground upon evaluation". Could you clarify what that means? Is that not at odds with the statement in the above question?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: I believe the authors have adequately stated the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We want to sincerely thank the reviewer for taking the time to read and review our paper. We are happy to hear that the reviewer found the paper very well written.
> I'm not really sure what the point of evaluating on MNIST-addition is. The authors seems to be using it to argue for the scalability of their approach. However, I don't see how their approach is more scalable compared to DeepProbLog since they incur an extra branching factor owing soft-unification. They mention the use of approximate inference techniques, (...). Furthermore, since such approximations were not proposed by the authors, it seems only fair to also utilize them when comparing against DeepProbLog.
We will try to make this clearer in the paper.
- First, we stress that the MNIST-addition experiment does not use approximate inference techniques but is exact (see line 299). Our solution to MNIST-addition uses a different encoding compared to DeepProbLog, which avoids the combinatorial explosion for higher digits.
- The new, more efficient encoding relies on embeddings. When such an encoding can be used, this "tensorization" is asymptotically faster (comparable to lifted inference vs ground inference). You are right that this could also be done in DeepProbLog if the implementation supports embeddings and neural functors, but this would just give you a very similar system to DeepSoftLog.
- The authors believe this experiment is relevant to the community as it shows that MNIST-addition is solvable exactly in linear time in the number of digits, which has not been demonstrated before. This suggests that the use of embeddings in neuro-symbolic methods can be very powerful. It also suggests that the MNIST-addition experiment (one of the most common neuro-symbolic benchmarks) is actually rather easy when embeddings can be used.
> Section 5.3 is to me, maybe what this entire paper is all about. However, I feel it would need to be expanded considerably.
We agree that this experiment is the most interesting setting for DeepSoftLog. We have since performed experiments on more grammars and added a neural baseline (You can find the expanded section 5.3 in the pdf attached to the Author Response). On the other hand, these experiments are limited significantly due to the scalability of knowledge compilation and would require approximation techniques to properly scale up, which we leave for further research.
> Could you please explain what you meant by the paragraph on lines 204-208?
In this paragraph, we motivate our concrete choice for the soft-unification function s. Theorem 5 gives us the exponential form for s, but the choice of distance function d is still open. We use the angular distance (i.e. the arccos of cosine similarity) as this optimizes well. But as the soft-unification function is the negative exponential of the distance function (theorem 5), this would mean that the minimal soft-unification is achieved when two vectors lie opposite on the sphere. In other words, we can pack very few symbols into our embedding space before they all start to soft-unify with each other, which is undesirable (it creates big branching factors, or requires high embedding dimensions). By taking an absolute value, the minimum soft-unification is achieved by orthogonal vectors, which means we get exponentially more positions with low soft-unification.
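A minimal numeric sketch of this argument (our own illustration, not the authors' code), taking s(x, y) = exp(-d(x, y)) with d the normalised angular distance, shows how the absolute value moves the minimiser from opposite vectors to orthogonal ones:

```python
import math

# Sketch (assumption: angular distance normalised to [0, 1]); not the paper's
# exact implementation. With abs(), orthogonal vectors minimise soft-unification
# and opposite vectors soft-unify maximally (treated as the same direction).
def soft_unify(u, v, use_abs=True):
    cos = sum(a * b for a, b in zip(u, v)) / (math.hypot(*u) * math.hypot(*v))
    if use_abs:
        cos = abs(cos)
    d = math.acos(max(-1.0, min(1.0, cos))) / math.pi  # angular distance in [0, 1]
    return math.exp(-d)

e1, e2, e3 = (1.0, 0.0), (-1.0, 0.0), (0.0, 1.0)
print(soft_unify(e1, e2, use_abs=False))  # opposite vectors: minimal, exp(-1)
print(soft_unify(e1, e3))                 # orthogonal vectors: minimal under |.|, exp(-0.5)
print(soft_unify(e1, e2))                 # opposite vectors under |.|: 1.0
```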
> On lines 219-221 you mention that you need a neural network for every functor. I'm guessing this does not scale if you have a large logic program?
If you have a very large number of different functors, this could indeed become a problem. In our experience, this has not been an issue because in practice the more pressing concern for scalability is the probabilistic inference.
> On lines 247-248 you mention that you "do not actually ground the soft-unification function s, but provide it as a built-in". Could please clarify what you mean by that? It is my understanding that when compiling into a circuit, everything is grounded?
You are correct that when compiling a circuit, everything is grounded. What we mean is that we do not explicitly create the full grounding of the soft-unification function, as most of these soft-unifications would not be used. Instead, our implementation provides a built-in that generates these facts as needed.
Compare this with how in ProbLog, a first-order rule also has a large (or infinite) number of groundings, but only the subset that is actually needed for the query is generated (a concept known as the relevant grounding).
> On lines 259-260 you mention that "An important constraint we apply is the soft-unification predicate needs to be ground upon evaluation". Could you clarify what that means?
Consider this DeepSoftLog program:
```
p(~a).
query :- p(~X).
```
In this example, if we query `query`, the variable `X` does not get instantiated before we arrive at the soft-unification (between `~a` and `~X`). This is still well-defined: `X` could be instantiated with every possible element in the domain, but this usually results in an infinite number of unifications. To avoid this problem, we constrain our inference to cases where `X` is already instantiated. Otherwise, we throw a runtime error.
Note that this is very similar to regular ProbLog, where you can have first-order probabilistic facts, as long as the grounding with respect to the query is finite.
> I believe line 87 should read as "(2) Disjoin the proofs..."
We will correct this mistake, thank you for noticing.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I believe your response provides a satisfactory answer to all of my questions. I will raise my score.
Strengths: The theoretical analysis of learnable soft-unification is technically sound and is potentially interesting to the other neuro-symbolic systems where unification is present. From the first two experiments, the proposed DeepSoftLog achieves significant improvements and also ablation study on the proposed soft-unification properties is included. The paper has provided sufficient background for the readers to understand this work.
Weaknesses: - My main concern is that the technical details are not clear to me. From Algorithm 1, it is unclear how the score in soft-unification is set; how soft-unification is incorporated during the inference and knowledge compilation steps is not explained, while this is key to understanding the probabilistic semantics of soft-unification.
- Another concern is that the motivation for the soft-unification is not adequately justified. I want to see more about why the defined four properties can improve performance and why soft unification with the defined four properties can outperform the other neuro-symbolic systems. Also, I wonder how these properties affect computational complexity.
- At Line 165, both s(x, y) and s(y, z) are real numbers in [0, 1]. I wonder what a conjunction of two numbers means.
- Both \land-transitive and \times-similarity are not defined, while they are used in Theorem 2&4 respectively.
- In Definition 3, should it be that x \neq z.
- Authors should justify why there's no baseline included in the differentiable finite state machine experiment in Sec. 5.3 to make the empirical evaluation thorough.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Can the authors provide more motivations on why the four properties of soft-unification can improve performance and some analysis on how they affect complexity?
- Can the authors explain how the soft-unification scores are incorporated during inference?
- I don't understand the example in the introduction: what is the meaning of Listing 1, what are the ~newstate1 & ~newstate4, and why DeepSoftLog would learn to set ~newstate1 = ~state2 & ~newstate4 = ~state1?
- Why there's no baseline for experiment in Sec. 5.3?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We first sincerely thank the reviewer for taking the time to read and review our paper.
> I don't understand the example in the introduction: what is the meaning of Listing 1, what are the ~newstate1 & ~newstate4, and why DeepSoftLog would learn to set ~newstate1 = ~state2 & ~newstate4 = ~state1?
First we note that ~new_state1 and ~new_state4 on line 39 should have been ~prev_state1 and ~prev_state2 respectively. We apologize for this typo.
The example in Listing 1 implements a finite state machine in DeepSoftLog. However, the states (e.g. ~state1 or ~prev_state2) are not purely symbolic but are represented with embeddings. Hence, DeepSoftLog can learn the transitions of the automaton from data. In this example, DeepSoftLog can create a transition from state 1 to state 2 by assigning the same embedding to ~prev_state1 and ~state2. During training, we optimize these embeddings by giving positive and negative examples and minimizing the cross-entropy loss with gradient descent (see the experiment in section 5.3, where the automaton is learned jointly with the perception).
> My main concern is that the technical details are not clear to me. From Algorithm 1, it is unclear how the score in soft unification is set; how the soft-unification is incorporated during inference
> Can the authors explain how the soft-unification scores are incorporated during inference?
A main contribution of the paper is that soft-unification can be transformed into probabilistic logic by explicitly encoding the soft-unifications as probabilistic facts. As an example, suppose we have the following DeepSoftLog program:
```
p(~a).
query :- p(~b).
```
Then we can transform it into an equivalent ProbLog program:
```
0.5 :: ~a ≃ ~b.
p(X) :- ~a ≃ X.
query :- p(~b).
```
So after this transformation, which introduces the probabilistic fact ~a ≃ ~b to implement soft unification, we can do regular probabilistic inference. We explain this transformation more formally on lines 235-267. We also prove that this is equivalent to regular soft-unification (proof is in appendix C). Some more elaborate examples of this transformation are included in appendix B.
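The probability computation this transformation enables can be sketched in a few lines (our own toy illustration, not the DeepSoftLog implementation): here the only proof of `query` uses the single probabilistic fact ~a ≃ ~b, so P(query) = P(~a ≃ ~b) = 0.5.

```python
# Toy sketch (our own illustration): after the transformation, P(query) is
# ordinary probabilistic inference over the introduced soft-unification facts.
soft_unif_prob = {("a", "b"): 0.5}  # probability of each soft-unification fact

def proof_probability(facts_used):
    # A proof succeeds iff all of its (independent) probabilistic facts hold,
    # so its probability is the product of the facts' probabilities.
    p = 1.0
    for fact in facts_used:
        p *= soft_unif_prob[fact]
    return p

# `query :- p(~b).` resolves against `p(X) :- ~a ≃ X.`, consuming ~a ≃ ~b:
print(proof_probability([("a", "b")]))  # → 0.5
```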
> knowledge compilation step is not explained while this is key to understanding the probabilistic semantics of soft-unification.
We first want to stress that knowledge compilation (KC) is unrelated to the probabilistic semantics, but is the standard way to do probabilistic inference in probabilistic logics such as ProbLog. We chose not to introduce KC because (1) understanding it is not really relevant to the content of the paper, besides solving the disjoint sum problem, and (2) KC would require a fairly lengthy introduction. We refer to [3] for more details.
> At Line 165, both s(x, y) and s(y, z) are real numbers in [0, 1]. I wonder what a conjunction of two numbers means.
How the conjunction is evaluated depends on the semantics. So e.g. for probabilistic logic, this conjunction is evaluated as a multiplication. This is explained in lines 166-167.
> Both $\land$-transitive and $\times$-similarity are not defined, while they are used in Theorem 2&4 respectively.
The definitions of $\land$-transitivity and $\times$-similarity can be found on lines 165 and 168 respectively.
> Another concern is that the motivation for the soft-unification is not adequately justified. I want to see more about why the defined four properties can improve performance and why soft unification with the defined four properties can outperform the other neuro-symbolic systems.
> Can the authors provide more motivations on why the four properties of soft-unification can improve performance and some analysis on how they affect complexity?
We summarize the main points of how DeepSoftLog concretely improves the performance and how this relates to the proposed properties:
- DeepSoftLog optimizes better compared to previous systems (e.g. NTP). This is mostly due to satisfying Def4. It has previously been shown [1] that gradient descent gets stuck in local minima if not all proof paths are updated simultaneously (see lines 151-158).
- Def2 also improves the training stability by disallowing redundancy in the proof paths (see also the ablations in figure 2).
- Soft-unification has so far been used only in fuzzy systems, while DeepSoftLog is based on probabilistic semantics. The use of probabilities as opposed to fuzzy values gives DeepSoftLog a performance advantage. Note that this is the only aspect that impacts the computational complexity (which is the same as for ProbLog, see section 7). This trade-off between performance and scalability for fuzzy versus probabilistic logic is well known in the literature.
- Def3 has previously been found to be necessary to make the soft-unification sufficiently expressive [2].
For a more in depth discussion, we refer to section 3.
> Why there's no baseline for experiment in Sec. 5.3?
This is a good point, and we also considered it. However, we are unaware of an existing neuro-symbolic framework that could properly implement this experiment, as it requires both learnable perception and learned structure. We have since implemented an RNN as a simple neural baseline. You can find the results in the pdf attached to the “Author Rebuttal”.
> In Definition 3, should it be that x \neq z.
We will correct this typo, thank you for noticing.
[1]: de Jong, Michiel, and Fei Sha. "Neural theorem provers do not learn rules without exploration." arXiv preprint arXiv:1906.06805 (2019).
[2]: Julián-Iranzo, Pascual, Clemente Rubio-Manzano, and Juan Gallardo-Casero. "Bousi~ Prolog: a Prolog extension language for flexible query answering." Electronic Notes in Theoretical Computer Science 248 (2009): 131-147.
[3]: Fierens, Daan, et al. "Inference and learning in probabilistic logic programs using weighted boolean formulas." Theory and Practice of Logic Programming 15.3 (2015): 358-401. | null | null | Rebuttal 1:
Rebuttal: As was requested by some reviewers, we have expanded section 5.3 with additional experiments and a neural baseline. We have attached the new section 5.3 as a pdf.
Pdf: /pdf/eb05abd1bcb2fab730e4e28f205cfdddb1ee6cc8.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Don’t Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner | Accept (poster) | Summary: The paper proposes a new approach to pre-training language models called Prompt-based Continued Pre-training (PCP). PCP combines the idea of instruction tuning with conventional continued pre-training. The authors argue that PCP can improve the performance of prompt-based fine-tuning on a variety of natural language processing tasks.
Strengths: 1. The paper presents a clear comparison of the proposed PCP with the conventional continued pre-training. The evidence on benchmark datasets is strong to support the claim that PCP performs better on these tasks.
2. The paper is well-written and easy to understand. The authors do a good job of explaining the technical details of the proposed approach, and they provide clear and concise summaries of the experimental results.
3. The approach is simple and easy to adopt.
Weaknesses: 1. It is not well understood why conventional continued pretraining (TAPT) is doing so poorly on sentence-pair tasks.
2. PCP makes use of templated prompts to align continued pretraining and fine-tuning, together with additional pseudo-labels, so it is unclear whether the gain is due to prompt alignment or to the additional data augmentation from pseudo-labels. Further ablation is necessary.
3. It is unclear how well the proposed approach generalizes to other types of pretraining objectives such as language modeling.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. More ablations and analysis are needed to understand why TAPT is not working for sentence-pair tasks.
2. More experiments to ablate the effect of template alignment vs data augmentation via pseudo-labels.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Generalization of the approach to other pretraining objectives is unclear.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewer (bM1j)’s thoughtful and thorough evaluation of our paper. We are genuinely appreciative of the positive feedback concerning simplicity and effectiveness of our approach, solid experimental evidence, and the quality of our presentation. We would like to respond to the reviewer's valuable feedback as follows:
***Because PCP makes use of templated prompts to align continued pretraining and fine-tuning, with additional pseudo-labels. It is unclear whether the gain is due to prompt alignment or the additional data augmentation from pseudo-labels. Further ablation is necessary.***
Thank you for your insightful comments and suggestions. To address this question, we conduct an additional ablation study in which we solely utilize pseudo labels or templates in our proposed method PCP. This ablation study is carried out using soft prompt-based fine-tuning. As shown in the Table below, the experimental results indicate that using either pseudo labels or templates exclusively hurts the model's performance. This highlights the vital importance of integrating both templates and pseudo labels into our proposed method.
| | SST-2 | SST-5 | MR | CR | MPQA | Subj | TREC | CoLA | Mean |
|----------|-------|-------|----|----|------|------|------|------|------|
|Prompt FT | 92.5 | 48.0 | 86.8 | 90.8 | 81.2 | 90.3 | 83.0 | 4.9 | 72.2 |
|Prompt FT+PCP| 93.9 | 50.7 | 89.8 | 92.0 | 88.3 | 94.9 | 88.6 | 21.5 | 77.5 |
|Prompt FT+PCP(Pseudo Labels Only)| 93.7 |50.8 | 87.7 | 91.3 | 85.1 | 94.3 | 85.7 | -0.7 | 73.5 |
|Prompt FT+PCP(Template Only)| 90.7 | 43.5 | 88.6 | 92.6 | 82.0 | 95.1 | 84.1 | 0.7 | 72.2 |
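As an illustration of the idea being ablated (the template and verbalizer below are our own assumptions, not taken from the paper), a PCP training instance can be built by wrapping unlabelled text in the prompt template and filling the verbalizer slot with the pseudo-label, so that masked language modelling during continued pre-training sees the same input format as prompt-based fine-tuning:

```python
# Hypothetical sketch: the template "It was [label]." and the verbalizer
# mapping are assumptions for illustration only.
def build_pcp_instance(text, pseudo_label, verbalizer):
    return f"{text} It was {verbalizer[pseudo_label]}."

verbalizer = {0: "terrible", 1: "great"}  # assumed SST-2-style verbalizer
print(build_pcp_instance("A moving and powerful film.", 1, verbalizer))
# → A moving and powerful film. It was great.
```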
***It is not well understood why conventional continued pretraining (TAPT) is doing so poorly on sentence-pair tasks?***
We express our gratitude to the reviewer for their constructive suggestion. In response to this query, we have undertaken further experiments. For detailed information, please refer to our general response to all reviewers.
***It is unclear how well the proposed approach generalizes to other types of pretraining objectives such as language modeling.***
We appreciate your insightful comments and questions regarding the generalizability of our proposed approach to other pre-training objectives. We recognize the value of exploring this avenue. We will do our best to get the necessary resources to investigate this important direction, and we leave this to future work.
---
Rebuttal Comment 1.1:
Comment: I appreciate the due diligence the authors showed in addressing some of the concerns the reviewers, including myself, raised. I don't have further concerns, even though we still don't fully understand why TAPT performs poorly on sentence-pair tasks (the authors did a very thorough ablation given the limited time).
I would change my Rating to 7: Accept.
---
Reply to Comment 1.1.1:
Comment: We are very glad to hear that our response is helpful! May we kindly ask the reviewer to consider updating the score officially, as the organizers have now rectified the issue with updating scores? Thank you again for your understanding and assistance. | Summary:
This work re-examines a well-known technique in the NLP literature, called continued pre-training, that can be utilized to enhance the performance of language models on downstream tasks (in this case, for classification and regression tasks).
The authors have revealed that the conventional approach to continued pre-training is ineffective when applied to sentence-pair tasks and prompt-based fine-tuning settings.
Based on their findings, the authors propose a simple method that involves training on the masking language modeling objective while augmenting the text input with prompts.
This proposed approach has demonstrated its effectiveness across a range of tasks and configurations, suggesting that it can be an effective choice for natural language understanding tasks when using Transformer encoder-based models.
Strengths: - The proposed method is technically simple and effective.
- The work begins with a compelling finding that challenges the effectiveness of the conventional approach, which was previously regarded as effective.
- The authors made efforts to provide analysis from multiple perspectives, aiming to convince readers of the effectiveness of the proposed approach.
Weaknesses: - Although the paper offers a detailed explanation and analysis, there are still ambiguous points that need to be clarified for a better understanding by the readers. Let me ask about this point in the following Questions section.
- There is room for improving the grammar and fluency of the paper's writing.
- I would like to see a thorough analysis of the factors that potentially contribute to the success of the proposed method. For instance, it would be valuable to explore the significance of including labels in continued fine-tuning (although partially considered in Section 4.4) and investigate why sentence-pair tasks do not benefit from the conventional TAPT approach.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors:
To the best of my understanding, it appears that your method incorporates prompts and their corresponding verbalized labels into the process of continued pre-training. If this is the case, there is a possibility that the tokens for verbalized labels are chosen as the target for masked language modeling. This implies that the method might be performing a similar task during both continued pre-training and fine-tuning. Consequently, there is a potential risk that the effectiveness of the proposed method is primarily attributed to unintentionally prolonged fine-tuning. If this turns out to be true, it could diminish the significance of this work.
To alleviate any ambiguity regarding this point, it would be helpful if you could provide concrete examples demonstrating how your method is applied to real data instances. This would offer a clearer understanding of the methodology and help evaluate its effectiveness more accurately.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The evaluation primarily focuses on the utilization of a single model, specifically RoBERTa-large, for relatively straightforward downstream tasks such as classification. These tasks are considered comparatively easier to solve.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We express our gratitude to the reviewer (fyTk) for the insightful and comprehensive assessment. It is heartening to receive positive feedback on the simplicity and effectiveness of our approach, as well as the quality of our analysis. Furthermore, we appreciate the acknowledgement of our contribution in identifying the limitations of the previous approach. We would like to respond to the reviewer's valuable feedback as follows:
***I would like to see a thorough analysis of the factors that potentially contribute to the success of the proposed method. For instance, it would be valuable to explore the significance of including labels in continued fine-tuning (although partially considered in Section 4.4)***
We are sincerely thankful for valuable comments and suggestions. As suggested by the reviewer, we carry out an additional experiment which involves the inclusion of labels in the Task-Adaptive Pre-training (TAPT) before performing CLS-based fine-tuning. The results of this experiment are presented in Table below. Our findings suggest that the inclusion of labels in the TAPT phase does not notably improve the model's performance, and there remains a considerable performance gap with our proposed method. We hope that our response has adequately addressed your initial concerns and would be most grateful if the reviewer could consider a score increase accordingly.
| | SST-2 | SST-5 | MR | CR | MPQA | Subj | TREC | CoLA | Mean |
|----------|-------|-------|----|----|------|------|------|------|------|
|CLS FT | 81.2 | 41.7 | 76.3 | 79.5 | 65.1 | 91.7 | 80.3 | 26.7 | 67.1 |
|CLS FT+TAPT| 88.2 | 43.4 | 86.1 | 86.2 | 73.7 | 94.2 | 80.4 | 1.9 | 69.3 |
|CLS FT+TAPT with Labels| 89.2 | 42.3 | 84.2 | 85.7 | 75.8 | 93.8 | 84.5 | 0.7 | 69.5 |
|Prompt FT+PCP| 93.9 | 50.7 | 89.8 | 92.0 | 88.3 | 94.9 | 88.6 | 21.5 | 77.5 |
***investigate why sentence-pair tasks do not benefit from the conventional TAPT approach.***
We express our gratitude to the reviewer for their constructive suggestion. In response to this query, we have undertaken further experiments. For detailed information, please refer to our general response to all reviewers.
***To the best of my understanding, it appears that your method incorporates prompts and their corresponding verbalized labels into the process of continued pre-training. If this is the case, there is a possibility that the tokens for verbalized labels are chosen as the target for masked language modeling. This implies that the method might be performing a similar task during both continued pre-training and fine-tuning. Consequently, there is a potential risk that the effectiveness of the proposed method is primarily attributed to unintentionally prolonged fine-tuning. If this turns out to be true, it could diminish the significance of this work.***
We appreciate the insightful comments and suggestions. To answer the reviewer’s question, we conduct additional experiments. We run CLS-based fine-tuning for five times more steps (5k steps in total) from the TAPT checkpoint. As shown in the table below, prolonged fine-tuning brings only a marginal improvement of 0.1% on average across the eight tasks. Notably, this still falls significantly short of our proposed method (by 8.1% absolute). We will include this in the revised version of our paper.
| | SST-2 | SST-5 | MR | CR | MPQA | Subj | TREC | CoLA | Mean |
|----------|-------|-------|----|----|------|------|------|------|------|
|CLS FT(1k steps)+TAPT| 88.2 | 43.4 | 86.1 | 86.2 | 73.7 | 94.2 | 80.4 | 1.9 | 69.3 |
|CLS FT(5k steps)+TAPT| 89.6 | 43.4 | 86.7 | 87.0 | 72.9 | 94.6 | 79.0 | 1.7 | 69.4 |
|Prompt FT(1k steps)+PCP| 93.9 | 50.7 | 89.8 | 92.0 | 88.3 | 94.9 | 88.6 | 21.5 | 77.5 |
***The evaluation primarily focuses on the utilization of a single model, specifically RoBERTa-large, for relatively straightforward downstream tasks such as classification. These tasks are considered comparatively easier to solve.***
We appreciate the insightful comments and suggestions. To facilitate a direct comparison, we conducted experiments on benchmarks that are widely recognized and employed in previous research [1,2]. This follows the benchmarking methodology of prior studies on prompt-based fine-tuning, which is widely accepted within the field.
[1] Tianyu Gao, Adam Fisch, and Danqi Chen. Making Pre-trained Language Models Better Few-shot Learners. ACL 2021.
[2] Ningyu Zhang, Luoqiu Li, Xiang Chen, Shumin Deng, Zhen Bi, Chuanqi Tan, Fei Huang, and Huajun Chen. Differentiable prompt makes pre-trained language models better few-shot learners. ICLR 2022.
***There is room for improving the grammar and fluency of the paper's writing.***
We are thankful to the reviewer for their constructive critique. We will conduct a thorough review and improve the writing in our revised version. | Summary: This paper explores the problem of continued pre-training on task-related text. The authors discovered that conventional continued pre-training methods may not be very effective and can even have a negative impact on fine-tuning performance. To address this, they introduce prompt-based continued pre-training. The approach involves generating pseudo labels on unlabeled data using a fine-tuned model and constructing a prompt-based pre-training corpus by applying templates to the pseudo labeled dataset. The researchers then utilize this corpus, which incorporates prompt information, for continued pre-training and prompt-based fine-tuning. Experimental evaluations conducted on various datasets validate the effectiveness of the proposed approach, as it achieves significant improvements over the baseline methods.
Strengths: * The researchers identify the limitations of conventional continued pre-training on task-related text and propose an innovative solution.
* The experimental results provide strong evidence for the effectiveness of the proposed approach.
Weaknesses: No major weaknesses have been identified in this paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Is it possible to extend this approach to the task of language modeling fine-tuning?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the effort and time by the reviewer (pe8v). We are heartened by their positive appraisal of our work and their recognition that no major weaknesses have been identified in the paper. We would like to respond to their invaluable feedback as follows:
***Is it possible to extend this approach to the task of language modeling fine-tuning?***
We greatly appreciate the reviewer's thoughtful comments and suggestions. If we understand correctly, the idea of incorporating prompt-based language modeling (prompt-based continued pre-training) as an auxiliary loss during fine-tuning is indeed a compelling one. We will definitely consider this perspective, viewing it as a potential route for exploration in our upcoming research. | Summary: This paper makes a contribution by studying how to adapt pre-trained models to downstream tasks. The authors identify the limitations of TAPTs and show when they do not work well. They then propose PCP, a better algorithm that can adapt a pre-trained model to a target task. PCP is shown to be more effective than TAPTs on a variety of tasks.
Strengths: The paper provides an intriguing analysis of when TAPT-style continued pre-training is effective for fine-tuning. In particular, TAPT is not very effective on sentence-pair tasks or with prompt-based fine-tuning.
Weaknesses: The paper does a good job of pointing out the weaknesses of TAPT, but it is less clear why PCP works better. The authors do not need to know exactly why it works, but some hypotheses would be helpful. For example, why does PCP-style pretraining work better for prompt-based fine-tuning? Is it because the "pretraining" is more similar to "fine-tuning" in PCP? If so, how can we understand why PCP also works for sentence pair tasks?
The presentation of the paper could also be improved. Figure 2 is not very easy to understand, and the overall flow of the paper is not as clear as it could be.
Overall, the paper could be improved by providing more clarity and explanation.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Could the authors provide more intuition or analysis of why PCP works better? Based on that intuition, what kind of additional analysis can be done here?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper only applies to text classification tasks, ignoring many text generation tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewer (TYSB)’s thoughtful and thorough evaluation of our paper. We sincerely appreciate the positive feedback regarding our intriguing analysis and our contribution in identifying the limitations of the previous approach. We would like to address the reviewer's valuable feedback as follows:
***The paper does a good job of pointing out the weaknesses of TAPT, but it is less clear why PCP works better. The authors do not need to know exactly why it works, but some hypotheses would be helpful. For example, why does PCP-style pretraining work better for prompt-based fine-tuning? Is it because the "pretraining" is more similar to "fine-tuning" in PCP? If so, how can we understand why PCP also works for sentence pair tasks?***
We acknowledge the reviewer's inquiry regarding the intuition behind our method. The intuition is that `the language model benefits from seeing the template/prompt before the task, whether trained with supervised or unsupervised learning objectives`:
- As we mentioned in the paper, the importance of presenting templates to language models has also been shown in various instruction-tuning works. Our work differs from these works in two main aspects: (1) they aim to improve zero-shot learning performance, while our work aims to improve fine-tuning performance; and (2) they learn the template knowledge through supervised learning objectives, while we use unsupervised learning. However, the intuition (why PCP works better) is similar: showing the templates/prompts to the language model improves performance on the downstream tasks.
- To explain why PCP works, we attribute its advantage over TAPT to *presenting the prompt template to the model*, as we mentioned in the paper, because this is the only difference between our method and TAPT. We find that TAPT only tells the model what the text for the target task looks like (let us call it in-domain knowledge). In contrast, our proposed method PCP tells the model not only the in-domain knowledge but also the prompt information that will be used for fine-tuning on the target task. We also agree with the reviewer that PCP brings the continued pre-training closer to the prompt-based fine-tuning. We ablate the importance of templates and labels in the PCP in our subsequent response to the reviewer.
We will clarify this in the revised version of our paper.
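To make this difference concrete, below is a minimal sketch of how a TAPT corpus and a PCP-style corpus could be built from the same unlabeled texts. The template `It was <label>.`, the verbalizer words, and the function names are illustrative assumptions (loosely in the style of sentiment templates from Gao et al.), not the paper's exact prompts:

```python
# Sketch: TAPT continues MLM pre-training on raw task text, while PCP wraps
# the same text with the prompt template plus a verbalized pseudo label.
# The template and verbalizer below are hypothetical examples.

VERBALIZER = {0: "terrible", 1: "great"}  # pseudo-label id -> label word

def tapt_example(text: str) -> str:
    """TAPT: the model only sees the raw in-domain text."""
    return text

def pcp_example(text: str, pseudo_label: int) -> str:
    """PCP: the text is wrapped with the same template (and a verbalized
    pseudo label) that prompt-based fine-tuning will later use."""
    return f"{text} It was {VERBALIZER[pseudo_label]}."

unlabeled = [("the movie was a delight", 1), ("a tedious, joyless slog", 0)]
tapt_corpus = [tapt_example(t) for t, _ in unlabeled]
pcp_corpus = [pcp_example(t, y) for t, y in unlabeled]
```

Both corpora would then be fed to the same masked-language-modeling objective; the only difference is whether the prompt and pseudo label are visible during continued pre-training.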
***Could the authors provide more intuition or analysis of why PCP works better? Based on these intuitions, what kind of additional analysis can be done here?***
The main intuition behind our PCP is that it is important to show the model the template/prompt that will be used in the target task. We perform an ablation study to emphasize the importance of including both the template/prompt and the label in PCP. As presented in the table below, the experimental results suggest that relying solely on either labels or templates hurts the model's performance. This highlights the importance of integrating both templates and pseudo labels into our proposed method PCP.
| | SST-2 | SST-5 | MR | CR | MPQA | Subj | TREC | CoLA | Mean |
|----------|-------|-------|----|----|------|------|------|------|------|
|Prompt FT | 92.5 | 48.0 | 86.8 | 90.8 | 81.2 | 90.3 | 83.0 | 4.9 | 72.2 |
|Prompt FT+PCP| 93.9 | 50.7 | 89.8 | 92.0 | 88.3 | 94.9 | 88.6 | 21.5 | 77.5 |
|Prompt FT+PCP(Pseudo Labels Only)| 93.7 |50.8 | 87.7 | 91.3 | 85.1 | 94.3 | 85.7 | -0.7 | 73.5 |
|Prompt FT+PCP(Template Only)| 90.7 | 43.5 | 88.6 | 92.6 | 82.0 | 95.1 | 84.1 | 0.7 | 72.2 |
***The presentation of the paper could also be improved. Figure 2 is not very easy to understand, and the overall flow of the paper is not as clear as it could be. Overall, the paper could be improved by providing more clarity and explanation.***
We express our gratitude to the reviewer for their insightful feedback on the presentation of our paper. We will improve the presentation in our revised version as follows:
- We will revise Figure 2 to incorporate the aforementioned intuition.
- To further clarify our work with more explanation, all five additional experiments mentioned in our rebuttal will be incorporated into the revised version of our paper. | Rebuttal 1:
Rebuttal: We appreciate all the reviewers for dedicating their time and effort to evaluating our work. We are thrilled to receive positive feedback on **the novelty of our approach** (CwVD,pe8v), **the simplicity and effectiveness of our approach** (CwVD,fyTk,bM1j), **solid experimental evidence** (CwVD,pe8v,bM1j), **intriguing/thorough analysis** (TYSB,fyTk), and **the quality of the presentation** (bM1j). We also thank the reviewers (TYSB,pe8v,fyTk) for acknowledging our **contribution in identifying the limitations of the previous approach**.
To answer the reviewers' questions, we conduct 5 additional experiments, regarding **the potential reason why TAPT does not work on sentence pair tasks**, **the impact of PCP on the MLM accuracy**, **an ablation study on template only and label only**, **the impact of adding labels to TAPT**, and **the impact of training CLS-based fine-tuning longer**. We hope that our response, paired with these additional experiments, will address the reviewers' concerns.
One common question concerns why TAPT does not work on sentence pair tasks. We delve into this particular issue below; for the remaining concerns, we will respond to each reviewer individually. We have evaluated three possible explanations for TAPT's ineffectiveness on sentence pair tasks: **training data size**, **sentence pairs with higher similarity than observed in the pre-training data**, and **lack of separation within sentence pairs**. Our experimental results suggest that the ineffectiveness of TAPT on sentence pair tasks is not an isolated incident but a recurring issue. Below we discuss each setting in detail.
- **Training data size**. We wonder whether the size of the training data could be a limiting factor. To test this, we perform TAPT on the MNLI, MNLI-mm, SNLI, QNLI, and QQP datasets, with up to 360k training examples. Our experimental results, detailed in our paper's Appendix, reveal that training TAPT with a large corpus still undermines the performance of CLS-based fine-tuning on sentence pair tasks.
- **High similarity within sentence pairs**. We consider that the high similarity between the sentence pairs might conflict with the word distribution that the model observed during pre-training. For instance, in the MNLI task, two sentences are `Salt kept the town fed` and `Salt kept the town thriving`. To explore this, we perform TAPT in two different settings: one where we continually pre-train on randomly paired sentences within the dataset, and another where we continually pre-train using just the first sentence of each pair. As shown in the table below, training TAPT in either setting leads to even worse performance.
- **Token-based separation of sentence pairs**. In an attempt to mitigate the effect above, we also consider that distinguishing the two sentences using distinct tokens might make a difference. To test this, we perform TAPT with two types of separator tokens: the special token from the tokenizer and the template used in PCP (without labels). As shown in the table below, training TAPT with separator tokens between the two sentences can somewhat mitigate the performance drop for CLS-based fine-tuning on the sentence pair tasks. However, the results remain inferior to CLS-based fine-tuning without the use of TAPT.
In conclusion, our investigations highlight the difficulties that TAPT faces on sentence pair tasks, while our proposed method PCP provides a simple yet effective solution. We hypothesize that TAPT's ineffectiveness for CLS-based fine-tuning on sentence pair tasks might be due to various factors, which we leave for a more comprehensive investigation in future work.
| | MNLI |MNLI-mm|SNLI |QNLI | RTE | MRPC | QQP |STS-B | Mean |
|---------- |------- |-------|-----|-----|------|------|------|------|------|
|CLS FT | 46.2 | 48.5 | 45.6| 61.4| 54.2 | 73.2 | 58.5 | 46.0 | 54.2 |
| +TAPT| 36.0 | 36.3 | 45.7| 55.6| 53.4 | 67.7 | 55.0 | 48.1 | 49.7 |
|+TAPT (Tokenizer Sep)| 36.4 | 37.5 | 50.5| 58.8| 50.8 | 63.5 | 59.2 | 48.8 | 50.7 |
|+TAPT (PCP Sep)| 36.3 | 36.7 | 64.6| 58.3| 51.2 | 65.3 | 57.4 | 44.5 | 51.8 |
|+TAPT(random sent pair)| 34.8 | 35.4 | 37.7| 52.2| 51.2 | 64.8 | 56.9 | 23.8 | 44.6 |
|+TAPT(first sent only)| 35.6 | 35.9 | 42.7| 52.2| 52.6 | 62.5 | 53.6 | 16.7 | 44.0 | | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The proposed method (PCP) is built on top of pseudo-labeling and continued pre-training via masked language modeling. This method provides an alternative way to use pseudo-labeled data before fine-tuning the model on downstream tasks. It improves the TAPT method and other semi-supervised methods for text classification.
Strengths: - The novelty of this paper lies in using pseudo-labeled data for masked language modeling. This idea is simple, yet effective. Even though pseudo-labeling and continued MLM training are not new, the way to combine these two is new.
- Extensive experiments on a large number of text classification tasks show the effectiveness of the method.
Weaknesses: - One of the motivations (mentioned in the introduction) is that TAPT does not work well on sentence pair classification. However, it’s still unclear why TAPT does not work well on sentence pair classification while PCP can address this issue.
- Although PCP works empirically better than TAPT, the intuition of PCP is still unclear. Why should we continue pre-training on pseudo-labeled data instead of unlabeled data as in TAPT?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What masking strategy is used in Step 2? Are you using the random masking strategy as used in RoBERTa?
2. If you have pseudo labels in PCP, is random masking in Step 2 the optimal strategy? How about some selective masking strategies (e.g., PMI-masking [1])?
3. The paper is missing a self-training baseline, where you only mask the pseudo labels for MLM pre-training in Step 2.
4. It’d be good to include the scores of TAPT in Fig 3, representing “no label + FT”. It would be helpful to compare “wrong label + FT” and “no label + FT”. Because the method is sensitive to the pseudo-labeling performance, it’d also be good to add a few-shot setting where the base model is fine-tuned on few-shot examples in Step 1, and to compare PCP with TAPT in that few-shot setup.
5. To better compare TAPT (“no label + FT”) with PCP (“pseudo-label + FT”), we should have a better understanding of the impact of the “pseudo-label” in the MLM process. Does the addition of the “pseudo-label” in the input sentence improve the MLM prediction accuracy?
[1] PMI-Masking: Principled masking of correlated spans
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the effort and time of the reviewer (CwVD). We are thrilled to receive positive feedback on the novelty and simplicity of our method, along with the extensive experiments that underscore its effectiveness. We would like to address the reviewer's valuable feedback as follows:
***it’s still unclear why TAPT does not work well on sentence pair classification while PCP can address this issue.***
We acknowledge the reviewer's valid question regarding the performance of TAPT on sentence pair tasks. In response, we have performed additional experiments to answer this question. We would invite the reviewer to refer to our general response for details.
***Although PCP works empirically better than TAPT, the intuition of PCP is still unclear. Why should we continue pre-training on pseudo-labeled data instead of unlabeled data as in TAPT?***
We appreciate the reviewer's question. As mentioned in our paper, our intuition for PCP is that `the language model benefits from seeing the template/prompt before the target task`, as the importance of presenting templates to language models has been shown in various instruction-tuning works. We find that TAPT only tells the model what the text for the target task looks like (let us call it in-domain knowledge). Based on this finding and intuition, we propose PCP, which tells the model not only the in-domain knowledge but also the prompt information that will be used for fine-tuning on the target task. This also brings the objective of continued pre-training closer to the objective of fine-tuning. We will make this clear in our revised version.
***What masking strategy is used in Step 2? Are you using the random masking strategy as used in RoBERTa?***
Yes, we use the same masking strategy as in RoBERTa: we dynamically mask 15% of the tokens. We will make this clear in our revised version.
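For reference, RoBERTa-style dynamic masking can be sketched as follows. This is a simplified illustration over lists of token ids; the 15% rate and the 80/10/10 split follow the published BERT/RoBERTa recipe, while the constants and function name are our own assumptions:

```python
import random

# Sketch of RoBERTa-style dynamic masking for MLM continued pre-training.
MASK_ID = 50264      # RoBERTa's <mask> token id (assumed here)
VOCAB_SIZE = 50265   # assumed vocabulary size

def dynamic_mask(token_ids, mask_prob=0.15, rng=None):
    """Return (masked_ids, labels). Each token is selected with prob 15%;
    of the selected tokens, 80% become <mask>, 10% a random token, and 10%
    stay unchanged. Labels are -100 (ignored by the loss) except at the
    selected positions. "Dynamic" means the mask is re-sampled on every
    pass over the data rather than fixed once during preprocessing."""
    rng = rng or random.Random()
    masked, labels = [], []
    for tok in token_ids:
        if rng.random() < mask_prob:
            labels.append(tok)              # predict the original token here
            r = rng.random()
            if r < 0.8:
                masked.append(MASK_ID)      # 80%: replace with <mask>
            elif r < 0.9:
                masked.append(rng.randrange(VOCAB_SIZE))  # 10%: random token
            else:
                masked.append(tok)          # 10%: keep unchanged
        else:
            labels.append(-100)             # position ignored by the MLM loss
            masked.append(tok)
    return masked, labels
```

In practice this is what a collator such as the one in common MLM training code does on the fly for every batch.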
***If you have pseudo labels in PCP, is random masking in Step 2 the optimal strategy? How about some selective masking strategies (e.g., PMI-masking [1])?***
We appreciate the reviewer's suggestion about PMI-masking, which is indeed highly relevant to our work. We will discuss it in our revised version. We agree with the reviewer that there exist some potential variants, such as using PMI-masking. However, it is essential to highlight that such a strategy would be applicable to both TAPT and our proposed method PCP. Our research is primarily centered on contrasting TAPT and PCP, rather than exploring strategies that could be equally beneficial for both. Moreover, assessing these variants across a broad range of datasets imposes a large resource burden on us. Consequently, we plan to leave the evaluation of these variants for future work.
***The paper is missing a self-training baseline, where you only mask the pseudo labels for MLM pre-training in Step 2***
We thank the reviewer for their constructive feedback. To clarify, when we exclusively mask the pseudo labels, our task reverts to prompt-based fine-tuning. We compare our proposed method with 4 state-of-the-art self-training models in Table 2 of our paper, where all these self-training baselines use the prompt-based fine-tuning as the backbone. Results show that PCP outperforms these state-of-the-art self-training models. We will clarify this in our revised version.
***It’d be good to include the scores of TAPT in Fig 3, representing “no label + FT”. It will be helpful to compare “wrong label + FT” and “no label + FT”.***
We thank the reviewer for this valuable suggestion. We will definitely do this.
***Because the method is sensitive to the pseudo-labeling performance, it'd be good to add a few-shot setting where the base model is fine-tuned on few-shot examples in Step 1, and compare PCP with TAPT in the few-shot setup.***
We thank the reviewer for this valuable suggestion. We exactly follow the few-shot learning setting of prior work [1]; the only difference is that we use additional unlabelled data. The pseudo-labels for this unlabelled data in PCP are assigned by the model trained in the few-shot learning setting. As shown in Figure 1 of our paper, PCP brings substantial improvement when using a base model trained on few-shot examples to produce pseudo-labels. We will clarify this in our revised version.
[1] Tianyu Gao, Adam Fisch, and Danqi Chen. Making Pre-trained Language Models Better Few-shot Learners. ACL 2021.
***To better compare TAPT (“no label + FT”) with PCP (“pseudo-label + FT”), we should have a better understanding of the impact of the “pseudo-label” in the MLM process. Does the addition of “pseudo-label” in the input sentence improve the MLM prediction accuracy?***
We thank the reviewer for the insightful suggestion. In response, we have performed additional experiments, as shown in the table below. Our results indicate that PCP indeed improves the accuracy of MLM. In terms of average accuracy across the 8 single-sentence tasks, TAPT attains a score of 0.6713, while PCP obtains a higher average accuracy of 0.7225. We also find that the accuracy of the pseudo label does not appear to significantly influence the MLM accuracy. To be specific, PCP with correct labels yields an average accuracy of 0.7188, while PCP with wrong labels records an average accuracy of 0.7238. Similar results can be observed for sentence pair tasks. The improved accuracy might stem from the use of the same template in all sentences, which is easier to predict. We will include this in our revised version. We hope that our response has adequately addressed the reviewer’s concerns and would be most grateful if the reviewer could consider a score increase accordingly.
| | TAPT | PCP | PCP (correct-label) | PCP (wrong-label) |
| ----- | ----- | ---- |--- | ---- |
| Single Sentence Tasks | 0.6713| 0.7225 | 0.7188 | 0.7238 |
| Sentence Pair Tasks | 0.7586 | 0.7700 | 0.7800 | 0.7728 | | null | null | null | null | null | null |
On Certified Generalization in Structured Prediction | Accept (poster) | Summary: This paper proposes a PAC-Bayesian risk bound for the task of structured prediction. Under the assumption that the data is generated by Knothe-Rosenblatt rearrangement, this method distills random output variables into a Wasserstein dependency matrix, which paves the way for improved generalization bounds of generative models.
Strengths: 1. The core method is based on the triangular measure transport and KR rearrangement of a tractable reference measure, which is novel and allows for flexible distillation.
2. The set of bad inputs is bounded, so the risk is better certified.
3. The presented approach has the potential to compute risk certificates for many downstream discriminative tasks.
Weaknesses: 1. My main concern is that the samples are assumed to be drawn from a distribution arising from a reference measure through KR rearrangement. Can the authors run some toy experiments with popular generative models to support this assumption?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see the weaknesses and limitations
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: I had a rough check on the derivation and proofs, and they seem correct. My only concern is whether the assumption is valid in practice.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Thank you for your careful reading of our manuscript and insightful review
It has been shown by [Bogachev2005] that any atom-free data distribution can be represented as the unique KR rearrangement of an atom-free reference distribution (such as the normal distribution or uniform distribution). This is a rather flexible model. For instance, every data distribution which has a density with respect to the Lebesgue measure satisfies these assumptions. As toy models, this includes all multivariate normal distributions and all Gaussian mixture models.
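For concreteness, the triangular structure that the KR rearrangement imposes can be written out as follows; this is the standard textbook form with generic notation, not the notation of our manuscript:

```latex
% KR rearrangement T pushing the reference measure \rho forward to the
% data distribution \mu on \mathbb{R}^d, i.e. T_{\#}\rho = \mu:
T(x) \;=\;
\begin{pmatrix}
  T_1(x_1) \\
  T_2(x_1, x_2) \\
  \vdots \\
  T_d(x_1, \dots, x_d)
\end{pmatrix},
\qquad
x_k \mapsto T_k(x_1, \dots, x_k) \ \text{strictly increasing in } x_k .
```

Each component $T_k$ is a conditional quantile transform, and atom-freeness of the one-dimensional conditionals is what guarantees its existence and uniqueness.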
Due to invertible architectures and measure transport required in both directions, these same assumptions are also at the core of many popular normalizing flow models, such as RealNVP [Dinh], FFJORD [Grathwohl], Invertible ResNet [Behrmann] and their conditional variants [Trippe], [Atanov].
These models have been successfully used as surrogates for real data in multiple applications [Altekruger], [Kousha], [Lugmayr], [Horvat], [Wang] which empirically supports the assumption of an atom-free data distribution.
[Dinh] Dinh, L., Sohl-Dickstein, J., & Bengio, S. (2016). Density estimation using real nvp. arXiv preprint arXiv:1605.08803.
[Grathwohl] Grathwohl, W., Chen, R. T., Bettencourt, J., Sutskever, I., & Duvenaud, D. (2018). Ffjord: Free-form continuous dynamics for scalable reversible generative models. arXiv preprint arXiv:1810.01367.
[Behrmann] Behrmann, J., Grathwohl, W., Chen, R. T., Duvenaud, D., & Jacobsen, J. H. (2019, May). Invertible residual networks. In International conference on machine learning (pp. 573-582). PMLR.
[Trippe] Trippe, B. L., & Turner, R. E. (2018). Conditional density estimation with bayesian normalising flows. arXiv preprint arXiv:1802.04908.
[Atanov] Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., & Vetrov, D. (2019). Semi-conditional normalizing flows for semi-supervised learning. arXiv preprint arXiv:1905.00505.
[Altekruger] Altekrüger, F., Denker, A., Hagemann, P., Hertrich, J., Maass, P., & Steidl, G. (2023). PatchNR: learning from very few images by patch normalizing flow regularization. Inverse Problems, 39(6), 064006.
[Kousha] Kousha, S., Maleky, A., Brown, M. S., & Brubaker, M. A. (2022). Modeling srgb camera noise with normalizing flows. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 17463-17471).
[Lugmayr] Lugmayr, A., Danelljan, M., Van Gool, L., & Timofte, R. (2020). Srflow: Learning the super-resolution space with normalizing flow. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part V 16 (pp. 715-732). Springer International Publishing.
[Horvat] Horvat, C., & Pfister, J. P. (2021). Denoising normalizing flow. Advances in Neural Information Processing Systems, 34, 9099-9111.
[Wang] Wang, C., Zhu, Y., & Yuan, C. (2022, October). Diverse Image Inpainting with Normalizing Flow. In European Conference on Computer Vision (pp. 53-69). Cham: Springer Nature Switzerland.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: Thanks for the response.
The authors have provided a detailed explanation of KR rearrangement. My concerns have been addressed. I will keep my original score of 6. | Summary: This paper establishes a new PAC-Bayesian risk bound for the structured prediction problem. Technically, it assumes the data are generated by the Knothe-Rosenblatt rearrangement of a factorizing reference measure, and then obtains generalization bounds with the Wasserstein dependency matrix.
Strengths: 1. Different from the existing PAC-Bayesian bound with ϑ-mixing dependency matrix, this paper uses the Wasserstein dependency matrix to measure the interdependence of the data. Built upon the Wasserstein dependency matrix, novel concentration inequalities and general PAC-Bayesian risk bounds are established.
2. The Knothe-Rosenblatt rearrangement adopted in this paper is related to the measured transport and generative models, which may bring some insights into this field.
Weaknesses: 1. The discussion of the existing work [1*] should be clarified. The authors argue that [1*] requires that data are generated by a Markov random field. However, its general PAC-Bayesian risk bounds (Section 5 in [1*]) do not need this assumption. The ϑ-mixing dependency matrix adopted there exists for any distribution; [1*] only chooses Markov random fields as concrete examples to clarify the implications of its theoretical framework. Thus, the PAC-Bayesian risk bounds in this paper may not be more general than those in [1*].
2. There is no theoretical example, simulation experiment, or real-world experiment in this paper. To show the superiority of the proposed framework, the authors should at least add some concrete theoretical examples (e.g., Markov random fields).
3. The paper is not written very clearly. More descriptions and discussions can be added for the technical notions and definitions. For example, when I read this paper for the first time, some notions (e.g., Equation 7) confused me, and I could not get intuitions for some definitions (e.g., the Wasserstein dependency matrix).
I will consider improving the score after you address my concerns.
[1*] Ben London, Bert Huang, and Lise Getoor. Stability and generalization in structured prediction. JMLR 2016.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weaknesses comments.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Thank you for your careful reading of our manuscript and insightful review
(1)
We agree that, in their most abstract form, the results of [1*] do not require an MRF assumption. However, in our view, the key issue which reduces the relevance of this theory to practitioners is computability. This is especially the case since *tight* generalization bounds for deep learning via PAC-Bayesian theory have been more recently demonstrated [Dziugaite & Roy], [Perez], [Clerico] for finite sets of independent data. For example, [Clerico] certifies CIFAR-10 classification risk of at most 20.66% with probability at least 96.5% over the draw of the sample for a model with empirical test-set error of 19.52 ± 0.2%. Thus, data dependency appears to be the only remaining roadblock in structured prediction.
With the more ambitious goal of tight, finite sample risk certificates in mind, the next step in our view is to instantiate abstract results of learning theory for concrete data models. Here, [1*] focuses on MRF data and all of the more concrete results in [1*] make this assumption. The authors explicitly leave the question of estimating ϑ-mixing coefficients from data for future work [1*, Section 7] and point to the preliminary work of [McDonald] on β-mixing. We are not aware of works which have pursued this for ϑ-mixing. In Section 2.1, we argue that this may not be possible without an explicit assumption on the data-generating process.
This is the starting point of our current work: we choose a specific, very general data model (measure transport by KR rearrangement) and construct an analogous PAC-Bayesian bound from related concentration of measure results which play particularly nicely with this data model.
More specifically, the key technical differences between our mathematical framework and theirs are (a) the choice of coupling measure between conditional distributions and (b) the choice of metric on the data space. They use the coupling measure construction of [Fiebig] and the Hamming distance, which allows them to distill dependency into a ϑ-mixing dependency matrix [1*, Lemma 6 in Appendix A.2]. We construct the required coupling from properties of the KR rearrangement (Lemma 5 and Lemma 3) and apply the concentration of measure results of [Kontorovich] without choosing a specific metric. Our approach is thus specifically geared towards measure transport models by essentially translating data dependency into a property of the transport map (the Lipschitz constants $L_{ij}$ in Eq. (16), Proposition 6).
(2)
We evaluated eq. (16) of Proposition 6 for a 2D-toy scenario and defined the bad set as suggested in the first paragraph of Section 5, line 297, in order to demonstrate that the theory is amenable to numerical evaluations in principle. Please see the PDF attached for an illustration. As mentioned in the paper, the development of dedicated numerical methods and experiments is beyond the scope of the paper, however.
(3)
Regarding eq. (7), we will add the following explanatory text to line 152:
Here $K^{(i)}(x,dy)$ is a Borel measure for every $x$, and (8) computes the expected value of $f$ at $x$, conditioned on the fixed realization of the subvector $x^{[i-1]}$.
The subsequent sentence in the paper, preceding Definition 2, clarifies the role of the Wasserstein matrix: "It turns out that the effect of the kernel (7) on local oscillations serves to quantify dependence of data with joint distribution $\mu$."
[1*] London, B., Huang, B., & Getoor, L. (2016). Stability and generalization in structured prediction. The Journal of Machine Learning Research, 17(1), 7808-7859.
[Dziugaite & Roy] Dziugaite, G. K., & Roy, D. M. (2018). Data-dependent PAC-Bayes priors via differential privacy. Advances in neural information processing systems, 31.
[Perez] Pérez-Ortiz, M., Rivasplata, O., Shawe-Taylor, J., & Szepesvári, C. (2021). Tighter risk certificates for neural networks. The Journal of Machine Learning Research, 22(1), 10326-10365.
[Clerico] Clerico, E., Deligiannidis, G., & Doucet, A. (2022, May). Conditionally gaussian pac-bayes. In International Conference on Artificial Intelligence and Statistics (pp. 2311-2329). PMLR.
[McDonald] Mcdonald, D., Shalizi, C., & Schervish, M. (2011, June). Estimating beta-mixing coefficients. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (pp. 516-524). JMLR Workshop and Conference Proceedings.
[Fiebig] Fiebig, D. (1993). Mixing properties of a class of Bernoulli-processes. Transactions of the American Mathematical Society, 338(1), 479-493.
[Kontorovich] Kontorovich, A., & Raginsky, M. (2017). Concentration of measure without independence: a unified approach via the martingale method. In Convexity and Concentration (pp. 183-210). Springer New York.
---
Rebuttal Comment 1.1:
Comment: Thanks very much for the detailed response. I appreciate that you clarify the discussion on the existing work [1*] and add a simulation to support your theory. I have increased the score from 4 to 6.
[1*] Ben London, Bert Huang, and Lise Getoor. Stability and generalization in structured prediction. JMLR 2016. | Summary: This work derives a novel PAC-Baeysian risk bound for structured prediction based on generative models, a triangular and monotone transport and Wasserstein dependency matrices.
Strengths: This is a technical paper with rigorous theoretical analysis. The flow is easy to follow. The authors have made very detailed comparisons to [1], on which this work is largely based. Many examples, such as image data, are included to help readers understand some of the intuitions behind the derived theoretical results.
[1] London, Ben, Bert Huang, and Lise Getoor. "Stability and generalization in structured prediction." The Journal of Machine Learning Research 17, no. 1 (2016): 7808-7859.
Weaknesses: There are no empirical results to showcase the tightness or usefulness of the risk bounds compared to bounds that ignore the size of the structured object. The assumptions are probably limited. See my comments on these points below.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I have a few minor concerns as follows:
1. In line 134, it is assumed that the structured loss is an additive, bounded and pointwise loss. This looks like the conclusion in this paper can only be generalized to affine losses. This assumption does not hold for, e.g., log loss, which goes to infinity, or F1 score, which is non-decomposable into components. So what family of losses can the risk bound generalize to?
2. The successful derivation of the risk bound in Theorem 7 is, I believe, largely thanks to the introduction of the bad set. I appreciate the detailed explanations of the intuitions for this set. But in practice, how do you define "bad", and how can you identify those data points? As the authors themselves suggest, one could use Equation 16 to decide on this, but the Lipschitz constants $L_{ij}$ may not be easy to compute.
3. Does the learned measure transport distribution converge to the true underlying distribution asymptotically? If not, how close are they?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the authors have adequately discussed the limitations of their findings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Thank you for your careful reading of our manuscript and insightful review
Bounds which ignore the size of the structured object cannot make any useful statement on generalization from a single example. For instance, the typical setting of node classification with graph neural networks considers a single graph, with training labels available on a subset of the nodes. In this case, all available data is part of a single structured object (the graph) and PAC-Bayesian learning theory (the only theory which is currently able to achieve tight bounds in deep learning) cannot make a useful statement about generalization if the internal structure is ignored (m = 1).
# Response to Reviewer Questions
(1)
The assumptions made on the loss function are common in the PAC-Bayesian literature. Recent works have aimed to relax them with some success. We refer to these works [3,27,28] in Section 1.1, line 56; an even more recent example is [Haddouche]. However, generalization to unbounded losses requires nontrivial extensions to our proofs.
Decomposition of the loss into a sum of component losses is usually assumed in PAC-Bayesian constructions as a prerequisite to underlying concentration of measure results (such as Hoeffding's inequality, which makes a statement about the sum of independent variables). Since we use more general concentration of measure theory which handles dependent data, our results can more easily generalize to losses which do not decompose (but are still bounded).
Note that PAC-Bayesian theory also offers opportunities. For example, the construction does not require differentiable loss. Thus, some loss functions (such as the 01 loss which is particularly natural in classification) can actually be available for PAC-Bayesian risk certification even though they are not typically useful for the training of deep networks. An example is the model of [Clerico] which enables direct optimization of expected generalization error in the sense of 01 loss. Beyond 01 loss, it has been proposed to certify the confusion matrix of classifiers using PAC-Bayesian methods [Adams].
(2)
We evaluated eq. (16) of Proposition 6 for a 2D-toy scenario and defined the bad set as suggested in the first paragraph of Section 5, line 297, in order to demonstrate that the theory is amenable to numerical evaluations in principle. Please see the PDF attached for an illustration. As mentioned in the paper, the development of dedicated numerical methods and experiments is beyond the scope of the paper, however.
(3)
Under the assumptions adopted in the paper, the measure transport can be realized via KR rearrangement, in principle; see, e.g., [6] (mentioned in line 90). Convergence rates for finite sample regimes, however, define an open research problem. A step in this direction was recently made in [4]. This is mentioned in the "Limitations" Section, line 341.
[Haddouche] Haddouche, M., & Guedj, B. (2023). Wasserstein PAC-Bayes Learning: A Bridge Between Generalisation and Optimisation. arXiv preprint arXiv:2304.07048.
[Clerico] Clerico, E., Deligiannidis, G., & Doucet, A. (2022, May). Conditionally gaussian pac-bayes. In International Conference on Artificial Intelligence and Statistics (pp. 2311-2329). PMLR.
[Adams] Adams, R., Shawe-Taylor, J., & Guedj, B. (2022). Controlling confusion via generalisation bounds. arXiv preprint arXiv:2202.05560.
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal. I have also read the other reviews. I will keep my rating and lean towards acceptance. | Summary: This paper develops a PAC-Bayesian bound on the risk of structured predictors which decreases with the number and size of examples. The work builds on concentration of measure results (e.g., [33]) and continues the line of work in [36] by removing the assumption that data are generated by a Markov Random Field (MRF). Instead, the present work assumes a triangular and monotone transport, a Knothe-Rosenblatt (KR) rearrangement of a reference measure, as the data model.
Strengths: **S1.** This theoretical work advances a series of works on bounding the risk of structured predictors via more recent concentration of measure results.
**S2.** The submission does a great job at providing an overview of prior results and presenting new developments.
Weaknesses: **W1.** The theoretical results in this submission are presented with little connection to implications to practice. For example, L341-344 state that how closely a measure transport distribution learned from data approximates the actual (unknown) distribution of the data is an open question and thus, no empirical results are provided. I wonder if more could be said in this regard.
As another example, in [36] bounds are applied to specific models. On one hand, the specifics of the model and training loss are taken into account. On the other hand, the PAC-Bayes bound is derandomized in order to apply it to a learned predictor. Are similar applications not possible for the results in the present submission?
**W2.** One of the claims is that the present work “makes a preliminary step towards leveraging powerful generative models to establish generalization bounds for discriminative downstream tasks.” In my view, providing further insight into next steps in this direction would help other researchers build upon this work.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: **Q1.** How are the theoretical results in this submission (perhaps given further development) to inform practice?
**Q2.** In general, what would be possible progressions of developments building on this work?
**Q3.** How does the restriction to atom-free data distributions limit relevance and applicability of the results in this submission, e.g., to discrete output spaces?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Potential negative societal impact was not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Thank you for your careful reading of our manuscript and insightful review
# W1 & Q2
The recent work of [Baptista] discusses a parametric class of monotone triangular maps which can serve as approximations of KR rearrangement. A possible direction of future work is to specialize this construction to functions which in addition provide a tractable form of the Lipschitz constants in Eq. (16).
The derandomization proposed in [London, Proposition 4] is built on an assumption on hypothesis stability. This is orthogonal to the assumptions on input stability made in our work and thus applies analogously in concert with our PAC-Bayesian construction. We have omitted this step as well as the consideration of bad hypotheses proposed by [London] for ease of exposition (see lines 289-290). We will add a reference to this after line 290: "The same applies to the derandomization strategy proposed by [London], which is based on hypothesis stability."
# W2 & Q1
Our work connects state of the art PAC-Bayesian learning theory (the only theory which is currently able to achieve tight bounds in deep learning) to plausible assumptions about concrete generative data models used in practice and realized via KR-based measure transport. In particular, our work addresses the challenging case of structured prediction and the ability to learn from few samples due to the dependency caused by internal structure.
A specific use of this theory is to improve training procedures based on a better understanding of generalization. Because tight risk certificates are now available in PAC-Bayesian learning, it has become possible to directly optimize a (tight) bound on the generalization error (out-of-sample) rather than merely optimizing empirical risk (in-sample) [Clerico] [Perez]. Our work is a step towards extending this to structured prediction scenarios. As an example, node classification with graph neural networks cannot currently be studied with established PAC-Bayesian methods because, although labels are available on a subset of graph nodes, all data are dependent (m = 1).
# Q3.
Due to invertible architectures and measure transport in both directions, the assumption of atom-free data distributions is required by the theory of measure transport based on KR rearrangements and hence at the core of corresponding popular normalizing flow models used today (such as RealNVP [Dinh], FFJORD [Grathwohl], Invertible ResNet [Behrmann]). This excludes discrete output spaces. On the other hand, due to the well-established theory of quantization of probability distributions, discrete scenarios can be implicitly modeled using atom-free measures, too - see, e.g., the scenario mentioned in the lines 127-128.
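The quantization remark can be made concrete with uniform dequantization, a standard device in the normalizing-flow literature for modeling discrete data with an atom-free measure; the sketch below is illustrative and not taken from the paper:

```python
import random

random.seed(0)

def dequantize(xs):
    # Add uniform noise on [0, 1) to each integer value: the resulting
    # distribution is atom-free, and taking the floor inverts the map.
    return [x + random.random() for x in xs]

pixels = [random.randrange(256) for _ in range(16)]  # discrete 8-bit values
z = dequantize(pixels)                               # continuous, atom-free samples
recovered = [int(v) for v in z]                      # floor recovers the originals
assert recovered == pixels
```

Flooring exactly undoes the added noise, so the discrete scenario is implicitly represented by an atom-free distribution, as the rebuttal describes.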
[London] London, B., Huang, B., & Getoor, L. (2016). Stability and generalization in structured prediction. The Journal of Machine Learning Research, 17(1), 7808-7859.
[Baptista] Baptista, R., Marzouk, Y., & Zahm, O. (2020). On the representation and learning of monotone triangular transport maps. arXiv preprint arXiv:2009.10303.
[Clerico] Clerico, E., Deligiannidis, G., & Doucet, A. (2022, May). Conditionally gaussian pac-bayes. In International Conference on Artificial Intelligence and Statistics (pp. 2311-2329). PMLR.
[Perez] Pérez-Ortiz, M., Rivasplata, O., Shawe-Taylor, J., & Szepesvári, C. (2021). Tighter risk certificates for neural networks. The Journal of Machine Learning Research, 22(1), 10326-10365.
[Dinh] Dinh, L., Sohl-Dickstein, J., & Bengio, S. (2016). Density estimation using real nvp. arXiv preprint arXiv:1605.08803.
[Grathwohl] Grathwohl, W., Chen, R. T., Bettencourt, J., Sutskever, I., & Duvenaud, D. (2018). Ffjord: Free-form continuous dynamics for scalable reversible generative models. arXiv preprint arXiv:1810.01367.
[Behrmann] Behrmann, J., Grathwohl, W., Chen, R. T., Duvenaud, D., & Jacobsen, J. H. (2019, May). Invertible residual networks. In International conference on machine learning (pp. 573-582). PMLR.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. In my view, even though the contribution is theoretical, commentary that bridges theory to practical implication or future development will benefit the submission -- e.g., the possibility of directly optimizing generalization error.
After reading other reviews and author responses I am keeping my original score. | Rebuttal 1:
Rebuttal:
# We thank all reviewers for their insightful and constructive reviews.
We respond in detail to each reviewer in the corresponding sections. Below, we summarize our responses to points which were raised by at least two reviewers.
# Connections/implications to practice, KR-based generative models and real data distributions.
Our work connects the recent line of research on tight PAC bounds for deep learning to *structured* prediction, based on generative models via KR rearrangements. The latter can cover any atom-free real data distribution in theory and a broad range of realistic scenarios in practice (our detailed response provides more references). Approximation bounds for finite sample scenarios are the subject of ongoing research; see, e.g., [4] for a step towards this goal.
# Discrete spaces
KR-based generative models require measure transport in both directions and assume atom-free reference and target distributions; see, e.g., [11]. We do not consider this a serious restriction since discrete scenarios can be represented through the quantization of probability distributions.
# Toy experiments, intuition about bad sets
We evaluated a toy experiment to demonstrate that our approach is amenable to numerical evaluation, in principle. Please see the PDF sheet attached. The design of a dedicated numerical algorithm and experiments is beyond the scope of the paper, however.
Pdf: /pdf/3b8f97474904abbbfe8b9f0ab4b843e79703752c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies PAC-Bayesian risk bound for structured prediction using concentration of measure. Specifically, authors characterize stability and dependency with Wasserstein dependency matrix under the data assumption that data are given by Knothe-Rosenblatt (KR) rearrangement of a factorizing reference measure. PAC-Bayesian risk bound for structured prediction has been studied in the previous studies [1], [2] with different stability notions and the main contribution of this paper is to apply measure-theoretic stability notions from [3] to the problem and provide a new PAC Bayesian bound that scales with the number of examples and their dimensions. This result agrees with the PAC risk bound provided in [2] under specific conditions, giving a more generalized interpretation based on Wasserstein dependency matrix.
[1] London, Ben, et al. "Collective stability in structured prediction: Generalization from one example." International Conference on Machine Learning. PMLR, 2013.
[2] London, Ben, Bert Huang, and Lise Getoor. "Stability and generalization in structured prediction." The Journal of Machine Learning Research 17.1 (2016): 7808-7859.
[3] Kontorovich, Aryeh, and Maxim Raginsky. "Concentration of measure without independence: a unified approach via the martingale method." Convexity and Concentration. Springer New York, 2017.
Strengths: 1. A new stability notion is considered for the PAC-Bayes risk bound, providing additional analysis and interpretations upon previous works.
2. Provided examples and intuitions are helpful for readers to follow the manuscript.
3. The KR rearrangement assumption on the data is not that unrealistic, considering that generative models using the KR rearrangement assumption have been studied recently, e.g. [4]. This entails the possibility of practical implications.
[4] Irons, Nicholas J., et al. "Triangular flows for generative modeling: Statistical consistency, smoothness classes, and fast rates." International Conference on Artificial Intelligence and Statistics. PMLR, 2022.
Weaknesses: 1. Definitions, notations, and some lemmas & theorems look pretty similar to those in [3] --- the authors may want to highlight the contribution of this paper by differentiating it from [3] or giving more explanation.
2. More background on PAC Bayes risk bounds could be helpful for readers. Especially, it would be nicer if how stability and dependency affects PAC Bayes risk bounds can be discussed.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1) If the measure space is discrete, can the bound be reduced to previously known PAC-Bayes bounds? I am curious how the measure-theoretic characterization can be connected with the previous results.
2) What's the intuition about "a bad set" --- is it the set possibly locally unstable?
3) Is there any possible simulation setup in which we can see that the provided PAC-Bayes bound works?
4) L151: What is $\delta_x$? --- confusion with $\delta_i$. $\delta$ is also overloaded in the probability $1-\delta$, authors may want to change the notation to reduce confusion.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations are properly discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Thank you for your careful reading of our manuscript and insightful review
1.
We do not claim to make a contribution to the mathematical literature on measure concentration with respect to functions of dependent random variables. Rather, we contribute to machine learning by applying the concepts devised in [3] to *structured* prediction based on the class of generative data models (normalizing flows) based on KR rearrangement. Thus our work extends the recent line of research on tight PAC bounds for deep learning to structured prediction, based on a probabilistic data model which covers many practical scenarios.
2.
We refer in Section 1.1, line 43, to works which survey and introduce PAC-Bayes theory. The influence of stability and dependency on our novel PAC-bound is discussed in the paragraph directly after the main Theorem 7, lines 245-250: bounded dependency and large size d of data points can compensate for smaller numbers m of samples in order to achieve tight bounds, which is important for scenarios of *structured* prediction.
# Response to Reviewer Questions
1.
Since our approach relies on a generative data model (normalizing flows) via KR rearrangement, which comprises an invertible architecture and measure transport in both directions (training and sampling), we assume that both the reference and the target (data) distribution are atom-free, according to the mathematical theory underlying KR-based measure transport. This excludes discrete spaces. On the other hand, due to the well-established theory of quantization of probability distributions, discrete scenarios can be implicitly modeled using atom-free measures, too - see, e.g., the scenario mentioned in the lines 127-128.
The format of our novel PAC bound, eq. (20), clearly connects to prior known PAC bounds. The right-hand side has the usual format comprising the empirical risk and a complexity term. The essential difference is that the latter term depends also on the norm of the Wasserstein dependency matrix times the oscillation vector, which makes it possible to learn and reliably predict even from few examples (small m), provided the internal dependencies are bounded and the size d of each data point is large.
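For orientation, a standard i.i.d. PAC-Bayes bound (McAllester-style) has the schematic shape below; the symbols are generic and this is not the paper's eq. (20):

```latex
\mathbb{E}_{h \sim \rho}\big[R(h)\big]
  \;\le\;
\mathbb{E}_{h \sim \rho}\big[\widehat{R}(h)\big]
  + \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\big(2\sqrt{m}/\delta\big)}{2m}}
```

with posterior $\rho$, prior $\pi$, and $m$ i.i.d. samples. In the dependent-data setting discussed here, the complexity term additionally involves the norm of the Wasserstein dependency matrix and the oscillation vector, so a large per-example size $d$ can substitute for a large $m$.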
2.
Intuitively, the bad set consists of rare data points whose internal dependency and structure, which we exploit to learn and certify structured prediction from few examples, do *not* conform to the quantitative assessment of these internal dependencies, according to Proposition 6. It contains data points with `unusually' pronounced internal dependencies, causing instability under data perturbations. Thus, the concept of a bad set makes it possible to quantify expected internal data dependencies in a sensible way (Proposition 6) so as to make the novel PAC-Bayes risk bound for structured prediction (Theorem 7) tight for any typical (= good) data point.
3.
We evaluated eq. (16) of Proposition 6 for a 2D-toy scenario and defined the bad set as suggested in the first paragraph of Section 5, line 297, in order to demonstrate that the theory is amenable to numerical evaluations. Please see the PDF attached for an illustration. As mentioned in the paper, the development of dedicated numerical methods and experiments is beyond the scope of the paper.
4.
We agree and are aware that the symbol $\delta$ is overloaded, which is acceptable if both the argument and sub- or superscripts disambiguate the interpretation. In the present case, $\delta_{x}$ with a vector as subscript denotes a Dirac measure in order to make Markov kernels well-defined in connection with conditional probability laws (eq. (7)) and expectation (eq. (8)). On the other hand, $\delta_{i}$ with a number in $[d]$ as subscript and a function as argument (e.g. eqns. (6) and (9)) uniquely refer to the local oscillation of the argument.
---
Rebuttal Comment 1.1:
Comment: I've read the authors' rebuttal and other reviews. The authors clarified main contributions and nicely addressed my questions. Still, as this paper applies the concepts devised in [3], it could have included more descriptions and explanations about them with more context. Thus I would keep my score --- I am inclined to accept based on its contribution, but I am also not strongly against rejecting the paper. | null | null | null | null | null | null |
Focused Transformer: Contrastive Training for Context Scaling | Accept (poster) | Summary: This paper proposes an improved training approach for memory-augmented Transformers on language modeling tasks. In particular, the authors identify the distraction issue in memory-augmented models, whereby the attention mechanism tends to focus on irrelevant contexts in the regime of long sequences. To address this concern, the authors propose cross-batch training, which is inspired by contrastive learning and exposes the attention mechanism to process both relevant and irrelevant documents. Extensive experiments are conducted to validate that the proposed cross-batch training can effectively shape the key-value space, mitigate the distraction issue, and successfully extend the model context size.
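As an illustration of the distraction issue and the cross-batch remedy summarized above, here is a toy scalar attention step in which keys/values from the current document compete in one softmax with key/value pairs injected from other documents; the 1-D setup and all names are illustrative sketches, not the paper's implementation:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_batch_attention(query, local_kv, other_kv):
    # Attend jointly over (key, value) pairs from the current document
    # and pairs contributed by other documents in the batch, so the
    # softmax must discriminate relevant from irrelevant entries.
    keys = [k for k, _ in local_kv + other_kv]
    vals = [v for _, v in local_kv + other_kv]
    weights = softmax([query * k for k in keys])
    return sum(w * v for w, v in zip(weights, vals))

# One relevant entry (large key match) plus two cross-batch distractors.
out = cross_batch_attention(1.0, [(5.0, 1.0)], [(-5.0, 0.0), (-4.0, 0.0)])
assert out > 0.99  # attention mass concentrates on the relevant pair
```

The intent, per the summary, is that training with such injected negatives shapes the key-value space so that relevant entries win the softmax even as the memory grows.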
Strengths: - The paper is clearly written and well structured.
- The identified distraction issue is compelling and makes a valuable contribution to the long-context research community, highlighting the need for more effective attention mechanisms or retrieving techniques.
- The method is straightforward to use and demonstrates its significant effectiveness in identifying relevant information.
Weaknesses: My main concern is about the increased training computational costs. The computational complexity for the memory-augmented attention layer increases from $O(bn^2)$ to $O(b d n^2)$, where $b$ represents the batch size, $d$ is the number of cross documents, and $n$ is the sequence length. Employing a large $d$ might introduce a significant computation overhead.
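For a sense of scale, the overhead pointed to here can be sketched with a back-of-the-envelope cost model (a rough proxy for attention cost in the memory layer only; constants and head dimensions are ignored, and the parameter values are illustrative):

```python
def memory_layer_cost(b, n, d=1):
    # O(b * d * n^2) proxy for one memory-augmented attention layer:
    # each of b*n queries attends over d*n keys. Not real FLOPs,
    # only the asymptotic scaling the review describes.
    return b * d * n * n

base = memory_layer_cost(b=8, n=2048)        # vanilla attention, O(b n^2)
cross = memory_layer_cost(b=8, n=2048, d=8)  # cross-batch with d = 8 documents
assert cross // base == 8                    # overhead grows linearly in d
```

Since only a subset of layers is memory-augmented, the whole-model overhead is smaller than this per-layer factor suggests.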
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. How does the proposed technique compare to conventional packing techniques commonly used during pretraining (e.g., concatenating multiple documents together, possibly interleaved with [EOS] tokens)? This technique is mostly used to reduce padding but also generates a sequence with context from diverse documents.
2. How well does the proposed model converge in comparison to vanilla transformers? Intuitively, as the attention mechanism in FoT is exposed to more irrelevant information, the learning process might slow down.
3. How can one determine the optimal transformer layer for memory augmentation? Does allowing more transformer layers access to the memory aid in the retrieval of relevant information?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for a thoughtful review.
**Regarding the computational cost**. We thank you for raising this important concern, which we have added to the limitation section. We also note that two factors mitigate this issue to some extent. As you noted, the increased cost occurs only in the memory layer. Second, FoT exhibits some context extrapolation, which might allow using smaller $n$ in training. Accordingly, we managed to fine-tune OpenLLaMA models using $d \leq 8$
Regarding the questions:
**Q1:** We thank the Reviewer for proposing an interesting baseline. To answer the question, we fine-tune a vanilla OpenLLaMA model on sequences of length $4096$ (original seq. len $2048$), which we consider a standard “data packing” baseline, and compare it to a FoT model trained on exactly the same data packed in the same way for 1B tokens. For clarity, we outline the following architectural differences between the baseline and FoT:
* Additional context beyond $2048$ tokens is used in just a subset of layers
* FoT does not use positional encodings in memory layers beyond its original context window ($2048$)
The results are as follows:
|Context/Setup | TREC: baseline ($\pm 1.0$) | TREC: FoT ($\pm 1.0$) | WebQS: baseline ($\pm 0.1$) | WebQS: FoT ($\pm 0.1$) |
| - | - | - | - | - |
| 2K | 52.8 | 55.6 | 20.7 | 20.8 |
| 4K | 57.2 | 60.9 | 18.7 | 21.0 |
We observe accuracy improvements when more few-shot demonstrations are provided in the extended context (from 2K used by OpenLLaMA to 4K used in our fine-tuning). On TREC, the gains from additional context are significant for both models. Our method presents better data efficiency than the baseline.
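The “data packing” referred to in Q1 can be sketched as follows (a hypothetical illustration operating on token-id lists; the actual pipeline may differ):

```python
def pack_documents(docs, seq_len, eos_id):
    """Concatenate tokenized documents, separated by an EOS token,
    then slice the stream into fixed-length training sequences."""
    stream = []
    for doc in docs:
        stream.extend(doc)
        stream.append(eos_id)
    # Drop the final partial sequence instead of padding it.
    return [stream[i:i + seq_len]
            for i in range(0, len(stream) - seq_len + 1, seq_len)]

# e.g. pack_documents([[1, 2], [3, 4, 5]], seq_len=3, eos_id=0)
# -> [[1, 2, 0], [3, 4, 5]]
```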
**Q2:** Starting with a large $d$ (crossbatch dimension) may slow down the process and result in the memory layer being ignored by the model. We have not seen any such problems when starting with a smaller value of $d\leq8$. See the plot with the training loss comparison in the attached pdf.
**Q3:** Due to limited resources, we have followed the choice of the Memorizing Transformer (MT) in picking the memory layer. We have also seen some additional gains from using multiple memory layers in our FoT fine-tuned OpenLLaMA models.
If your concerns have been sufficiently addressed in our responses, we humbly seek your support for the paper. Should you have any further concerns or additional points to raise, we are eager to address them.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed responses. The clarification resolves most of my concerns, and it is nice to see the improvements due to FoT over naive sequence packing, which validates its benefits. The new experimental results on both short- and long-context tasks further strengthen the claims. My rating has been adjusted upward due to the greatly enhanced clarity.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for supporting our work and, again, for useful suggestions. | Summary: This paper proposes the Focused Transformer (FoT), which converts one of the Transformer's attention layers into a memory attention layer, thus enabling the model to handle almost unlimited context lengths without being restricted by the constraints of local attention. The paper also proposes a cross-batch training method that can adapt the model to contexts of any length at a relatively low fine-tuning cost. Experimental results demonstrate the superiority of FoT on long-context tasks.
Strengths: 1. By only retaining one layer of Global Attention (Memory Attention), both training and inference are efficient;
2. During the inference stage, it can flexibly follow the results of kNN retrieval;
3. The cross-batch training method is very effective, and the training cost is also relatively low;
4. The increase factor of the context length for LLMs reached $2^{15}$!
Weaknesses: 1. With all due respect, I believe that the main text of the paper does not clearly introduce FoT and cross-batch training. I didn't understand it until I read the code in the appendix;
2. The inference stage needs to be combined with kNN, which is inconsistent with the training mode and may affect the upper limit of the model's ability;
3. There is a lack of comparison with other similar schemes for increasing Context length (such as Parallel Context Window).
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. I don't quite understand why, during the cross-batch training stage, we need to distinguish between C_prev and C_curr? Isn't it better to treat all tokens across batches as negative learning directly?
2. In the inference stage, how are the K and V to be input into Memory Attention calculated after kNN?
3. After cross-batch training, will the model overfit long Context? Is its performance on short Context still good?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors have clearly and sufficiently explained the limitations of their research
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the encouraging review.
The description of the method outlines the general idea, and we admit that it might be hard to infer details from it. We think presenting the details in the text would be quite cumbersome; thus, we plan to include the shortened versions of the code in the main body of the paper. We hope this will be satisfactory.
**Regarding the kNN**, we consider kNN to be an approximation of the full dense attention. With such a perspective, there is no inconsistency. However, in practice, the approximation errors may impact the performance. We did not observe this in our experimental regime. We leave proper studies of this to future work. We also note that our fine-tuned versions of OpenLLaMA models use full dense attention instead of kNN, which we find performant and efficient enough. We also note that using kNN opens the possibility of using fast approximate indices (e.g., implemented in Faiss), which might be necessary for scaling the method. We have added this to future work.
**Regarding the comparison with Parallel Context Window**, we add the following text to related work.
> Parallel Context Window introduces a method for extending the context of language models without training. They achieve this by embedding several context windows independently in parallel and allowing only a subset of tokens to attend to all windows. On the other hand, we fine-tune existing models and allow all tokens to attend to all previous tokens but only in a subset of layers. Our method also allows us to improve the structure of $(key, value)$ space of existing models.
Regarding the questions:
1. We observed that it is important to have at least one positive example that brings additional related information to memory layers (for example, previous local context window $C_{prev}$). Otherwise, the model may learn to ignore memory layers.
2. For each query in a memory layer, we take $k$ most matching keys from memory and add them to the attention for this query. That is, each query will attend only to all keys that precede it in the local context and its own $k$ most matching keys from memory. In the non-kNN approach, each query attends to the whole memory and all keys that precede it in the local context. To calculate $k$ most matching keys, we use the inner product. Note that in models presented in the paper, we remove positional encodings in memory layers.
3. We have managed to fine-tune OpenLLaMA models so that they maintain the performance of the base models on short-context Language Model Evaluation Harness tasks and show improvements on long-context ones. For details, please refer to the table from the general response.
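The per-query retrieval described in point 2 can be sketched as follows (a minimal single-query, single-head NumPy illustration, not the authors' implementation; causal masking within the local context is omitted):

```python
import numpy as np

def knn_memory_attention(q, local_k, local_v, mem_k, mem_v, k):
    """Attend over local keys plus the query's own k most matching
    memory keys, where matching is measured by inner product."""
    top = np.argsort(-(mem_k @ q))[:k]            # kNN retrieval from memory
    keys = np.concatenate([local_k, mem_k[top]])
    vals = np.concatenate([local_v, mem_v[top]])
    logits = keys @ q                             # no positional encodings in memory
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ vals
```

As noted above, setting k to the full memory size reduces this to standard dense attention over the extended context.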
We again thank you for an encouraging review. Should you have any further questions or concerns, we'd be happy to answer. We kindly ask to support our paper.
---
Rebuttal Comment 1.1:
Comment: Regarding question 2, I still don't quite understand, maybe I didn't express the question clearly, sorry.
What I want to know is, after we have retrieved the topk contexts related to the query in some measure (at this point these contexts should be pure text), how should we encode these contexts so that they can be concatenated into the key sequence of Memory layer.
---
Reply to Comment 1.1.1:
Title: clarification
Comment: The granularity of the retrieval in our work is token-level, like in [1] (not passage/context-level, like, e.g., in [2]). For each query, we retrieve k keys and values that are vectors, not text. These keys and values are integrated via a kNN-attention mechanism similar to [1].
For example, for $k = memorySize$ and a one-layer model, this is equivalent to extending the local context length and using standard attention instead of kNN (kNN with this k 'retrieves the whole memory'). To be more precise, the memory consists of $(key, value)$ pairs generated for each token in the chosen memory layer of the transformer model.
We note that Appendix A.2. details the inference procedure (and differences with Memorizing Transformer [1]) using formulas. We could potentially move (more parts of) it to the main part. Likewise, we’d be happy to apply other suggestions if you think this would clarify the exposition.
We thank you again for pushing this point; we’re determined to make it more clear.
[1] Yuhuai Wu, Markus N. Rabe, DeLesley Hutchins, Christian Szegedy. Memorizing Transformers, 2022 | Summary: This work presents a modified method to train Transformers with memory. The memory is based on k-Nearest Neighbors (with exact match). The work is based on the Memorizing Transformer paper from Y. Wu et al. 2022.
The model assumes a transformer with a memory attention layer (an attention layer that takes inputs from both the current sequence and the memory output). The (k-NN) memory returns the top-k matches for a query, and these are included in the input to the memory attention. One difference is the removal of the gating mechanism that mixes memory and local context; instead, everything becomes input to a single attention layer.
In addition, contrastive learning is proposed as a method to enhance the model's capability to differentiate relevant keys from irrelevant ones in the memory layer. This is achieved by including negative samples from other documents, applying a forward pass up to the memory attention layer, and using these (key, value) pairs as simulated responses from the k-NN memory. These samples are mixed with the previous context block from the current sample. This method is differentiable and allows backpropagating through these positive and negative samples, requiring only a modification to the data pipeline. The authors suggest that the contrastive learning is needed to reduce a "distraction issue" arising in a multi-document setup.
Finally, the authors show the value of their method with a set of experiments that compare mainly against the Memorizing Transformer. The results show that memory can extend the effective context length for language modeling at inference time, that a copy task can be solved using memory, that a pre-trained model can be fine-tuned to introduce the memory attention layer, and that performance improves in single- and multiple-document scenarios; an ablation study demonstrates the importance of the negative samples and the differentiable keys.
Strengths: The paper mixes the idea of negative sampling and a twist to the memorizing transformer model with a k-NN memory. The topic is of significance to this community.
* At inference time this model can increase the context length beyond the training length. Inheriting the original memorizing transformer property.
* It is possible to fine-tune from a pre-trained model with no memory to a model with external memory.
* The experiments show the value of the negative samples. The training methodology behind the idea is relatively simple.
Weaknesses: The contribution of this work has limited novelty: both negative sampling (and contrastive learning [2]) and memorizing transformers have been proposed in the past for language modeling.
The clarity and explanations in the work could be improved. The work continuously omits important details or descriptions, sending the reader in most cases to the appendix. The references could make a better exploration of contrastive learning methods applied to language modeling (e.g., [2]). Please, see questions for additional details that may need clarifications.
Retrieval-augmented language models are able to utilize memory to gain information from multiple documents. The "distraction issue" described in the paper suggests that the memory attention mechanism gets distracted by keys from different documents. However, this seems to contradict the findings of previous work (retrieval methods) that gains performance by using multiple documents.
The experiments are mainly limited to the Memorizing Transformer and the vanilla Transformer. However, retrieval language models solve the same multi-document problem. Also, the datasets chosen follow the Memorizing Transformer, failing to compare with existing benchmarks for the long-context use case, like SCROLLS [1].
[1] Shaham et al., SCROLLS: Standardized CompaRison Over Long Language Sequences, 2022
[2] Jain et al., CONTRACLM: Contrastive Learning For Causal Language Model, 2022
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: I would appreciate it if the authors could answer the following questions:
* Line 188: Why is layer 8 picked as the memory layer? How would someone choose the "right" layer for connecting the memory?
* What is the improvement of an existing language model (like OPT, LLaMA, etc.) when fine-tuned to introduce the memory attention layer?
* Why is your model not reaching perfect accuracy in the synthetic task? (Shallow) transformers are able to obtain and copy information. Is the same key related to multiple values in different documents in the dataset?
* What is stored in memory during the evaluation in Section 4.3? What is the data in the memory?
* What is the distance measure for k-NN?
* How is the number of neighbors $k$ chosen for the memory retrieval?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors address limitations of their work in Section 5.
Scaling (approximate) k-NNs and memory has been addressed for quite some time. See the NeurIPS 2021 competition on Billion Scale Approximate Nearest Neighbor Search.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for their thoughtful feedback. We acknowledge some deficiencies in the presentation. We focus on the long-context capabilities; the appropriate clarification is described in the general answer. In more detail, we aim for a single-stage method that can incorporate a large number of tokens directly in the model context (kNN attention can be used to approximate full attention). We observed that naively increasing the context length gives worse results, which is also confirmed, e.g., in [1]. This does not contradict the fact that retrieval methods benefit from additional documents. The difference lies in the fact that they typically use a two-stage approach, with the retrieval part doing the hard job of extracting only a relatively small number of tokens, which are then efficiently processed within the standard context length [2]. We include this clarification in the paper. We also make a number of smaller adjustments to the paper, which hopefully make it easier to follow. If the Reviewer sees any specific issue, we'd be happy to address it.
We acknowledge some issues with clarity. In the method description, we outline the general idea and admit that it might be hard to infer details from it. We think presenting the details in the text would be quite cumbersome. To amend the situation, we plan to include a shortened version of the pseudocode from the Appendix in the main body. The pseudocode has been found helpful by Rev #fPf8. Thus, we hope it will satisfactorily complement the description. If you see any other parts which require clarification, please let us know.
Below we address the questions:
1. As noted in the general response, due to the limited computational resources, we could not perform full hyperparameter sweeps. In particular, for the memory layer, we have followed the choice of Memorizing Transformers (MT) [3].
2. Regarding the improvements in existing language models and benchmarking on additional long-context tasks, as noted in the general response, we present 3B and 7B models based on OpenLLaMA along with results on Qasper (SCROLLS benchmark), TREC, and WebQS where we show improved performance when the model is provided with additional context.
3. Regarding the performance on the synthetic task: please note that the model is trained on a much shorter context than it is evaluated on, which makes the evaluation out of distribution.
4. Regarding the memory content in Section 4.3 during evaluation - this is a single-doc memory; that is, in the additional context, we only store keys and values belonging to the currently processed document.
5. The distance measure for kNN is inner product.
6. We have tested values of $k\in \{32, 64, 128\}$ and observed small differences in performance. We add this information to the Appendix.
Thank you for pointing out [4]; we add the following description to the related work section:
> CONTRACLM [4] applies contrastive losses at both the token and sequence levels during training to promote more uniformly distributed, isotropic representations. It is shown to enhance the discrimination of representations on textual semantic similarity benchmarks. While CONTRACLM focuses on improving the general expressiveness of representations, our work introduces contrastive-inspired techniques designed specifically for training the memory attention mechanism to handle longer context lengths. Nonetheless, exploring other contrastive learning objectives could be beneficial for further improving the memory key structure in future work.
[1] Nelson F. Liu et al., Lost in the Middle: How Language Models Use Long Contexts, 2023
[2] Sebastian Borgeaud et al., Improving language models by retrieving from trillions of tokens, 2021
[3] Yuhuai Wu, Markus N. Rabe, DeLesley Hutchins, Christian Szegedy. Memorizing Transformers, 2022
[4] Jain et al., CONTRACLM: Contrastive Learning For Causal Language Model, 2022
We again thank the Reviewer for raising important issues. We hope that our answers are satisfactory. If not, we’d be happy to provide more details. Otherwise, we’d appreciate it if the Reviewer reconsidered the final score of our submission.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors to take time to address my questions.
Accordingly, I have increased my score to Weak Accept.
Thank you | Summary: The paper proposes a key challenge (distraction issue) when extending the attention layer to external (key, value) pairs, either from previous context of the same document or from other documents. This paper proposes the Focused Transformer (FoT), a model that can utilize a long context by retrieving kNN (key, value) pairs from memory. The architecture of FoT is similar to Memorizing Transformer (how retrieved items are integrated is different), but the training technique is different. This paper proposes to use crossbatch to train the model, which simply includes both previous context of the same document and also contexts from different documents. So the model can learn how to distinguish between useful information and distraction.
Strengths: * The paper focuses on an important and potentially impactful problem that extends the context window size of transformer models.
* The proposed model and the training method is straightforward and easy to implement.
* The experiments are fairly solid and have supported the points that the paper has raised.
* I also like the synthetic dictionary task as an evaluation task to test if a language model has the ability to attend to desirable context and gather necessary information.
Weaknesses: 1. One main weakness of this paper is that it is not clear to me if the proposed model is designed to incorporate long context or to incorporate external memory which may come from a large corpus.
As the paper includes both single-doc and multi-doc experiments, I assume it is the latter case. Based on this assumption, the paper identifies “distraction issue”, which essentially means the model attends to more other documents when considering more documents during inference.
I did not get the point here. If we allow the memory to contain items from other documents, we actually expect the model to extract useful information instead of treating them as “distraction”. When showing the problem of the “distraction issue” (Fig 3), we only consider 64 documents, which is far from the real case where we want to use an external corpus. Indeed, prior works have shown that using more external information can help the model achieve better results instead of hurting it, e.g., (Khandelwal et al., 2019), (Zhong et al., 2022).
While if the paper is mainly considering only the long-context cases, we also should question if the “distraction issue” exists during inference or not, because we can always control the model to only attend to a single document.
So, I don’t think the paper is well-motivated or well-positioned. The authors are encouraged to clearly state what testing situations they are addressing and re-evaluate the proposed issue in that situation.
2. The paper could be presented better. For example, it is never clear to me what “external memory” exactly means in the paper. Why is the “previous local context of the given document” called positive and why are “contexts from unrelated documents” called negative, given that there is really no distinction in the training objective (correct me if that is not the case)?
3. The proposed CrossBatch technique is based on including both previous contexts from the same document and contexts from different documents in the same training batch. This method is very similar to the data batching method proposed in (Zhong et al., 2022). The authors are encouraged to discuss the differences.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * L144: why is there a positive and negative? Did you actually distinguish positive/negative in the contrastive loss?
* Table 2: why do you use token-level accuracy here but perplexity elsewhere?
* How do you define external memory? Anything that is out of the original input can be called external.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have well discussed the limitations of the research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback.
We admit deficiencies in clarity raised by the reviewer. We note that we focus on *the long-context capabilities*; see also the general response. In our experiments, we tested FoT in both single-doc and multi-doc scenarios to assess its potential usefulness. We found that FoT improves perplexity on single, long documents (see Section 4.5), and we believe this makes it applicable to generic long-context language modeling, which is strongly confirmed by our new experiments with large models. At the time of writing, we decided to keep some multi-doc experiments, e.g., to illustrate the distraction issue, which already impairs the model’s perplexity significantly at a relatively small scale (64 documents, see Figures 3,7). However, in retrospect, we recognize this might be confusing. To amend this, we apply the steps described in the general answer; in particular, we indicate the long context focus more explicitly.
We also note that there are practical applications where the multi-document setting is well-motivated, particularly repository-level code generation. We hope that our method could be scaled up to open possibilities for handling the entire repositories of code in context (possibly ~1M tokens in large codebases), which we plan to attempt in future work.
**Regarding the 'external memory'**. By this, we understand anything outside the local context window, i.e., anything that is accessed additionally by memory attention layers. This clarification has been added to the paper. To make this name less confusing, we changed it to 'additional context'.
**Regarding 'positive and negative examples'**.
Our method is inspired by contrastive learning in the way data is presented to the model. We assume the model is presented with distractions (possibly irrelevant documents) in the training phase. The previous local context (from the current document) is mixed with contexts from other documents in the batch. Intuitively, this 'forces' the model to learn to differentiate the positives (tokens from the current document, which are likely to be useful) from the negatives (tokens from other documents, which are unlikely to be useful). We note that this is not standard contrastive learning, as we do not have a separate contrastive loss. We only use the standard language modeling loss. We have added a clarification to the paper.
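The data presentation described above could be sketched like this (a hypothetical helper for intuition only; the real pipeline operates on (key, value) pairs produced by the memory layer, not raw windows):

```python
import random

def crossbatch_contexts(doc_windows, cur_doc, d):
    """Assemble the additional contexts seen by the memory layer while
    training on the current window of document `cur_doc`: the document's
    previous local context (positive) plus windows sampled from d - 1
    other documents (negatives).

    doc_windows: dict mapping doc id -> list of context windows.
    """
    positive = doc_windows[cur_doc][-2]  # C_prev, the previous local context
    others = [i for i in doc_windows if i != cur_doc]
    negatives = [doc_windows[i][-1] for i in random.sample(others, d - 1)]
    return [positive] + negatives
```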
**Regarding Table 2**. We agree that it is not the best way to compare the models, but we were constrained by pre-trained models (as different tokenizers are used, comparing perplexity is not informative). We only aim to show that we get better accuracy with more context available for a given single model, in contrast to comparing token-level accuracies between models with different tokenizers, which is inconclusive. A comment has been added to the caption.
**Regarding [1, 2]**
We first mention that our focus is different. As now clarified, we aim for a long context, while these papers are focused on retrieval from a large knowledge database. We have added a clarification to the related work section. On the technical level, [1] combines two probability distributions to predict the next token: one given by the model logits and the other created from retrieved pairs of (embedding, next token). Meanwhile, we extend the model context in a subset of attention layers, potentially allowing for reasoning within this extended context.
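For reference, the combination of two distributions in [1] can be sketched as an interpolation of next-token probabilities (a schematic with an assumed hyperparameter λ, not the exact implementation):

```python
import numpy as np

def knn_lm_predict(p_model, p_knn, lam=0.25):
    """kNN-LM-style prediction: interpolate the parametric LM's next-token
    distribution with one built from retrieved (embedding, next-token) pairs."""
    p = lam * np.asarray(p_knn) + (1.0 - lam) * np.asarray(p_model)
    return p / p.sum()  # renormalize against numerical drift
```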
We thank the Reviewer for raising the topic of the usefulness of other documents in the batch. It was observed that nearest neighbor language models (kNN-LM) display almost linear perplexity gains w.r.t. datastore size [3]. Due to practical limitations, we embed on the order of ~100K tokens per training batch, and documents in the batch are randomly sampled from a large corpus, which means it is unlikely that they are related to each other. Therefore, we should not expect significant perplexity gains for kNN-LM in that setting either, as the training batch comprises approximately 0.1% of the datastore. Empirically, we show that extending the model's context length with attention instead of using kNN-LM leads to a perplexity increase due to the aforementioned distraction issue. To the best of our knowledge, the distraction (perplexity increase) resulting from increasing the attention context length hasn't been studied before.
We agree with the Reviewer that TRIME [2] proposes a very similar objective inspired by contrastive learning, which is already mentioned in the related work section. The main difference is architectural: instead of attending to additional tokens in the memory layer, like in FoT, they combine probability distributions of the dense model and the retrieval database in the final layer, like [1]. Moreover, [2] focuses on retrieval from large databases, whereas our experiments mostly focus on long context. We have included this discussion in the related work section of the updated paper.
**Regarding the distraction issue at the inference time**, giving the model multiple unrelated documents is an extreme case. The distraction issue could possibly occur in single-doc scenarios for long documents consisting of several chapters. Please note that despite alleviating the distraction issue, FoT allows training long-context models using short-context data and improves performance in single-doc cases (see Section 4.5).
[1] Khandelwal et al., Generalization through Memorization: Nearest Neighbor Language Models, 2019
[2] Zhong et al., Training Language Models with Memory Augmentation, 2022
[3] Xu et al., Why do Nearest Neighbor Language Models Work?, 2023
If our responses have adequately addressed your concerns, we kindly request your support and consideration of improving your score. If you have any further concerns or additional points to raise, we are eager to address them. Your feedback is valuable in enhancing the quality and impact of our research.
---
Rebuttal Comment 1.1:
Title: discussion
Comment: We hope that our explanations and changes are clear enough. If there is something else, we would be happy to address it in the remaining time of the rebuttal period.
---
Rebuttal 2:
Comment: I appreciate the authors' response to my review. I am glad that the response has addressed my primary concern (weakness 1) regarding this paper. I have increased my score. | Rebuttal 1:
Rebuttal: We would like to thank you for all your valuable feedback, both positive and negative, which we believe will help us to improve the quality of our work.
We are delighted to note that the reviewers (Sux9, v42b, GWpw) praised the simplicity of our method and noted the potential impact (Sux9) of extending the context length (v42b, GWpw). Reviewer fPf8 noted the efficiency of FoT, and Sux9 praised the synthetic dictionary task.
Reviewers Sux9 and v42b raised important concerns about the scope and contributions of the paper, which are addressed below. Moreover, we would like to advertise new important experiments.
### New experiments with large models
In the period between the submission and the rebuttal, we secured additional compute resources, which let us confirm that our method is useful for much larger models. We believe that this significantly strengthens the paper. Specifically, we fine-tuned $3B$ and $7B$ OpenLLaMA models with our FoT objective. The resulting models exhibit advances on tasks requiring long context, and we extend the contribution list accordingly.
Below we shortly summarise the properties of our models. We would be happy to provide more details here if needed. Otherwise, we present them in an additional section in the paper. Specifically, our new models:
1. exhibit long-context capabilities on downstream tasks (see tables below),
2. retain the performance of the original models on short-context tasks,
3. are more compute and memory efficient at inference, compared to vanilla Transformers with the same effective context length,
4. have some context extrapolation capabilities. We illustrate that our models manage a 256K context length on the passkey retrieval task from [1], despite being trained on an 8K context. **(see pdf)**.
Ad 1. Our models exhibit performance gains from additional in-context few-shot examples on TREC question classification [2, 3] and WebQS question answering [4]. Moreover, they show improvements in F1 score on the Qasper (Question Answering over Scientific Research Papers) task [5], which is part of SCROLLS [6].
| Context/Setup | TREC: FoT fine-tuned OpenLLaMA 3B | TREC: FoT fine-tuned OpenLLaMA 7B | WebQS: FoT fine-tuned OpenLLaMA 3B | WebQS: FoT fine-tuned OpenLLaMA 7B |
|---------|---------------------------------|---------------------------------|----------------------------------|----------------------------------|
| 2K | 67.0 | 63.2 | 21.2 | 25.5 |
| 4K | 71.6 | 72.7 | 21.4 | 26.4 |
| 6K | 72.9 | 74.9 | 22.2 | 27.2 |
| 8K | 73.3 | 75.9 | 22.4 | 27.7 |
For Qasper, we used the implementation from the Language Model Evaluation Harness and observed that our 3B model benefits from the context increase. Below we provide zero-shot results. Note that LongChat 7B [7] was instruction fine-tuned.
|Context length | OpenLLaMA 3B | FoT fine-tuned OpenLLaMA 3B | LLaMA 7B | LongChat 7B |
| - | - | - | - | - |
| 2K | 18.7 | 18.7 | 18.7 | 19.4 |
| 4K | - | 20.7 | - | 21.2 |
| 6K | - | 23.2 | - | 25.0 |
| 8K | - | 26.6 | - | 28.8 |
Ad 2. Our fine-tuned OpenLLaMA models maintain performance on the standard suite of short-context tasks from the Language Model Evaluation Harness (we use the same collection of tasks as OpenLLaMA and provide the average scores).
|Model | OpenLLaMA 3B | FoT fine-tuned OpenLLaMA 3B | OpenLLaMA 7B | FoT fine-tuned OpenLLaMA 7B|
| - | -| - | - | - |
|Average score | 0.53| 0.53 | 0.55 | 0.55|
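As an aside, the passkey-retrieval evaluation mentioned in point 4 can be sketched roughly as follows (a minimal illustration in the spirit of [1]; the function name, filler text, and exact prompt wording are our own, not the benchmark's):

```python
import random

def make_passkey_prompt(n_filler, seed=0):
    """Bury a 'needle' (the passkey) inside repeated filler text;
    the model under test must answer the final question correctly."""
    rng = random.Random(seed)
    passkey = rng.randint(10000, 99999)
    filler = "The grass is green. The sky is blue. The sun is yellow. "
    lines = [filler] * n_filler
    # insert the needle at a random position within the filler
    lines.insert(rng.randrange(n_filler), f"The pass key is {passkey}. Remember it. ")
    prompt = "".join(lines) + "What is the pass key?"
    return prompt, passkey

prompt, passkey = make_passkey_prompt(n_filler=1000)
```

Accuracy is then the fraction of prompts for which the model's completion contains the correct passkey; increasing `n_filler` pushes the prompt toward the 256K-token regime discussed above.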
### Scope and contributions of the paper
To clarify, *our paper focuses on the long-context capabilities*. We agree that the current writing is somewhat unclear. We have identified the following issues which might have caused the confusion:
- We now stress that handling large external databases was the initial motivation of FoT, which was later changed to long-context.
- We used the term 'external memory', which we now change to 'additional context'.
- Memorizing Transformer, on which we base our method, is framed as a retrieval method. We now explicitly state in the related work section that despite these similarities, our aim is different. Moreover, we amend the related work to include more long-context papers.
- We include new long context tasks (see above). We keep the multi-doc experiments for illustrative purposes. However, we make explicit that the focus is on the long context.
We thank the reviewers for pinpointing this clarity issue. We hope that the above changes will address the concerns. We would be happy to make further adjustments if the reviewers find it useful.
### Tuning and hyperparameters
Reviewers v42b and GWpw raised questions about hyperparameters (e.g. the memory layers used). We note that some of the choices were educated guesses: due to the extreme computational cost, we could not perform a full hyperparameter search. For example, this was the case for the memory layer, which we chose based on the findings of the Memorizing Transformer. This information is now added as a limitation.
[1] A. Mohtashami, et al. Landmark Attention: Random-Access Infinite Context Length for Transformers.
[2] Li, Xin et al. Learning Question Classifiers.
[3] E. Hovy, et al. Toward semantics-based answer pinpointing.
[4] J. Berant, et al. Semantic Parsing on Freebase from Question-Answer Pairs.
[5] P. Dasigi, et al. A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers.
[6] U. Shaham, et al. SCROLLS: Standardized CompaRison Over Long Language Sequences.
[7] D. Li*, et al. How Long Can Open-Source LLMs Truly Promise on Context Length?
Pdf: /pdf/dd5bd2b6468267a7196c7df2773c32553dd85a88.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Deconvolving Complex Neuronal Networks into Interpretable Task-Specific Connectomes | Reject | Summary: This paper addresses the challenge of identifying elementary functional neuronal networks and their combinations in the context of complex tasks, using task-specific functional MRI (fMRI) data. The central problem it tackles is the deconvolution of task-specific aggregate neuronal networks into elementary networks. These elementary networks can then be used for functional characterization and mapped to underlying physiological regions of the brain. Due to the high-dimensionality, small sample size, acquisition variability, and noise inherent in this task, the authors propose a deconvolution method based on supervised non-negative matrix factorization (SupNMF). The results demonstrate that SupNMF can uncover cognitive "building blocks" of task connectomes that are physiologically interpretable, predict tasks with high accuracy, and outperform other supervised factoring techniques in both prediction accuracy and interpretability. Overall, the proposed framework offers valuable insights into the physiological foundations of brain function and individual performance.
Strengths: 1. The paper presents a valuable effort to implement a supervised decomposition method in a novel way, showing the potential for this approach in a complex context, such as the analysis of neuronal networks.
2. The authors provide fascinating results indicating that each task has unique markers within these learnable networks. This insight could contribute significantly to understanding how tasks are represented and processed within the brain. Observing that some networks are shared across tasks also provides a meaningful direction for future research.
3. The alignment of the findings with existing physiological research provides a sound basis for future study.
Weaknesses: 1. Reproducibility: The study could be enhanced by applying the proposed method to other datasets or by resampling the existing dataset. This would help to assess the generalizability of the method and the robustness of the findings, which is currently a limitation of the work.
2. Baseline Comparison: It would be beneficial if the authors had compared the proposed SupNMF method with other supervised decomposition methods, such as Partial Least Squares regression. This lack of comparison limits the understanding of how their proposed method stands in relation to existing methodologies regarding performance and effectiveness.
3. Parameter Study: The authors need to provide an in-depth analysis or sensitivity study concerning the weight parameter \lambda. As this parameter likely plays a significant role in balancing different loss terms, this omission constitutes a substantial weakness, potentially leaving readers unclear about the effectiveness of supervision signals.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. In the abstract, line 9 uses incorrect left and right double quotation marks.
2. Provide a reference and explanation about UMAP mentioned in the paper. For the benefit of readers who might not be familiar with this method, I recommend that the authors reference a key source on UMAP and provide a brief explanation of its use and significance in this context.
3. In the related work, there is a missing related paper when talking about the interpretable GNN, "Interpretable Graph Neural Networks for Connectome-Based Brain Disorder Analysis. MICCAI 2022"
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed review and helpful comments.
Weakness:
1.Reproducibility: The study could be enhanced by applying the proposed method to other datasets or by resampling the existing dataset. This would help to assess the generalizability of the method and the robustness of the findings, which is currently a limitation of the work.
**A**: We include additional results on a new dataset from the Cambridge Center for Aging and Neuroscience (CamCAN) segmented using a Harvard Oxford Atlas, as suggested by the reviewer. Our results show that the proposed method is consistently highly accurate on this new dataset as well – please see included results in the rebuttal PDF.
Furthermore, we add results on the robustness of our method by examining it across the full range of the number of factors, from 7 to 20 (7 because we have 7 tasks in HCP, and 20 because the dominant factors saturate by rank 20). Our results are remarkably robust across this range. We thank the reviewer for the suggestion – the suggested experiments further highlight the power of our method.
Table 1: Test accuracy using different sample sizes and ranks on the HCP dataset
|rank = 20| SupNMF|||NMF|||
|-|-|-|-|-|-|-|
|subjects |20|50|100|20|50|100|
|KNN|83.79±2.41|90.97±1.26|88.7±0.84|76.07±9.85|76.29±3.79|83.5±1.85|
|MLP|86.96±5.68|89.36±3.11|88.39±2.85|81.07±5.77|86.43±2.58|88±2.34|
|SVM|88±3.26|89.18±2.37|88.79±2.17|66.43±6.43|84.14±2.51|87±2.8|
| rank = 15 | SupNMF|||NMF|||
|-|-|-|-|-|-|-|
|subjects|20|50|100|20|50|100|
|KNN|81.5±2.78|88.24±0.99|88.79±0.67|56.43±9.29|82.43±4.65|82.43±3.5|
|MLP|80.54±7.27|84.43±4.93|87.29±3.4|78.57±8.45|83.71±4.87|87.14±2.26|
|SVM|88.21±7.67|88.29±3.31|89.5±1.24|73.93±7.33|76.29±6| 86±2.62|
| rank = 10 | SupNMF|||NMF|||
|-|-|-|-|-|-|-|
|subjects |20|50|100|20|50|100|
| KNN | 83.71±1.91 | 85.72±1.44 | 88.54±0.49 | 70.71±4.74 | 75.14±4.05 | 82.64±2.02 |
| MLP | 81.43±7.46 | 88.5±3.92 | 88.14±2.16 | 72.5±8.3 | 83.43±4 | 87.36±2.33 |
| SVM | 84.64±7.67 | 88.5±2.83 | 87.64±2.16 | 71.79±11.46 | 76.43±2.49 | 86.86±1.7 |
As shown in the table, SupNMF demonstrates consistent performance across varying subject numbers and ranks. In contrast, competing baselines such as NMF experience a significant drop in accuracy with fewer subjects.
Accompanying visualizations illustrating task accuracy spanning ranks 7 through 20 and varying subject counts are provided in the author rebuttal PDF. This enhanced analysis will be incorporated into the final version of the paper.
2.Baseline Comparison: It would be beneficial if the authors had compared the proposed SupNMF method with other supervised decomposition methods, such as Partial Least Squares regression. This lack of comparison limits the understanding of how their proposed method stands in relation to existing methodologies regarding performance and effectiveness.
**A**: Partial Least Squares (PLS) regression optimizes the covariance between the predictors and the response. PLS does not attempt to derive interpretable results, which is the primary motivation for our work; the high classification accuracy of our method is an added benefit. For this reason, we compare primarily against state-of-the-art methods capable of delivering interpretable results (the best-known current method in this class is SupSVD, which we have used as our primary baseline). In response to the reviewer, we have added another baseline, ICA, which is the method of choice in the neurosciences community. Our results show that the proposed method consistently outperforms ICA in classification accuracy, while yielding physiologically interpretable results (factors).
3.Parameter Study: The authors need to provide an in-depth analysis or sensitivity study concerning the weight parameter \lambda. As this parameter likely plays a significant role in balancing different loss terms, this omission constitutes a substantial weakness, potentially leaving readers unclear about the effectiveness of supervision signals.
**A**: We set regularization parameters relative to the scale of the input data, specifically lam = 100\*np.linalg.norm(X,'fro'), i.e., 100 times the Frobenius norm of the data matrix X. This ensures that the regularization is meaningful in the context of the data's magnitude. Parameter selection could also rely on heuristic techniques or inherent information; however, our default parameter selection works well in all of our experiments. Stated otherwise, our method does not need any parameter tuning, which is a significant benefit.
We present results for a broad range of lambda values in the table below, demonstrating our parameter choice yields optimal outcomes.
|lambda|1|10|10^2|10^3|10^4|100*np.linalg.norm(X_train.T,'fro')|
|-|-|-|-|-|-|-|
|test acc|76.36±4.16|80±3.19|81.79±2.73|87.5±2.29 |88.43±2.41|88.71±2.45|
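For concreteness, the data-scaled default can be computed as below (a minimal sketch; the helper name and toy data are ours, and since the Frobenius norm is transpose-invariant, norm(X.T) equals norm(X)):

```python
import numpy as np

def default_lambda(X_train, scale=100.0):
    """Scale the supervision weight by the Frobenius norm of the data,
    so the two loss terms stay comparable regardless of data magnitude."""
    return scale * np.linalg.norm(X_train.T, 'fro')

rng = np.random.default_rng(0)
X_train = np.abs(rng.normal(size=(50, 30)))  # toy non-negative data
lam = default_lambda(X_train)
```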
We now address the specific questions raised by the reviewer in the following.
**Q1**: In the abstract, line 9 uses the wrong left double quotation mark and right double quotation mark.
**A1**: We will fix this in the revised submission.
**Q2**: Provide a reference and explanation about UMAP mentioned in the paper. For the benefit of readers who might not be familiar with this method, I recommend that the authors reference a key source on UMAP and provide a brief explanation of its use and significance in this context.
**A2**: UMAP, akin to t-SNE, is a very commonly used visualization tool in the community. Space constraints limited a longer explanation; however, the revised version will duly incorporate relevant references and an explanation.
**Q3**: In the related work, there is a missing related paper when talking about interpretable GNNs: "Interpretable Graph Neural Networks for Connectome-Based Brain Disorder Analysis. MICCAI 2022"
**A3**: We appreciate the citation and will add it with the observation that related techniques have been used in the context of brain disorder analysis.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. I will increase my rating for this paper. | Summary: This paper presents a decomposition method for task-based functional connectivity. It proposes canonical task connectomes, which derive sub-structures of functional brain connectivity and group connectomes to identify elementary components of the overall connectivity. The authors use supervised non-negative matrix factorization to factor connectome matrices and show that the derived features are suitable for predicting tasks from functional MRI and provide a robust dimension reduction of the original representation.
Strengths: - The paper is clearly written.
- It constructs a clear optimization problem whose results are easily interpretable.
- It demonstrates a solid dimension reduction for connectome data.
Weaknesses: - Motivation for including supervision in the connectome decomposition is very weak.
- There are some missing details on variable descriptions, e.g., $d$ in line 96 is missing, $\hat y$ in eq (2) is missing (although can be inferred that it is a prediction for a class), and derivation of $X$ from $C$ should be better explained.
- Experiment is performed only on one study. It can be excused if the dataset is rare, but there are so many publicly available fMRI data.
- Lack of baselines. It is missing the most fundamental baseline, i.e., LDA. Moreover, just typing in "supervised dimension reduction" in google scholar yields various literature but this paper demonstrates only SupSVD as a supervised baseline.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - I am not sure what the role of supervised SVD is in this paper. It is written as a separate section in this paper but is used as a baseline in the experiment. I think section 2.3 can be removed and filled by other details of the proposed method.
- Utilization of other datasets? The authors mention several public neuroimaging datasets in the introduction but the proposed framework is validated only on a single benchmark. I believe validating the method on other neuroimaging studies will strengthen the paper.
- It is quite straightforward that task-wise supervision during decomposition will, of course, increase the accuracy of the downstream task prediction. Are there other benefits? What if there is a label set difference between the training and testing sets?
- Including supervision in dimension reduction / decomposition has a long history. I think the very very basic baseline should be LDA rather than NMF or SVD as it is a supervised method.
- Perhaps the authors should discuss why SupNMF is outperforming SupSVD in Fig 2.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper does not discuss any limitation of its own.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed review and helpful comments.
Weaknesses:
The review cites weakness in motivation, clarity in presentation, use of only one study, and need for additional baselines. We have corrected all issues relating to presentation and motivation. Furthermore, we have added a new dataset from the Cambridge Center for Aging and Neurosciences (CamCAN). We have also added an additional performance baseline using ICA, which is the most popular technique in neurosciences. These new results demonstrate that our performance gains generalize across datasets, and that they significantly outperform baselines (ICA and prior baselines of NMF and Supervised SVD).
We now address the specific questions raised by the reviewer in the following.
**Q1**: I am not sure what the role of supervised SVD is in this paper. It is written as a separate section in this paper but is used as a baseline in the experiment. I think section 2.3 can be removed and filled by other details of the proposed method.
**A1**: Supervised SVD presents the state-of-the-art baseline in supervised dimensionality reduction. This is the reason for inclusion of Supervised SVD. In response to the reviewer, we have added another baseline (ICA). If the reviewer still recommends removing Section 2.3 and expanding discussion of our method, experimental protocols, and conclusions, we are happy to do so.
**Q2**: Utilization of other datasets? The authors mention several public neuroimaging datasets in the introduction but the proposed framework is validated only on a single benchmark. I believe validating the method on other neuroimaging studies will strengthen the paper.
**A2**: We have now implemented our method on the CamCAN dataset as well, segmented using the Harvard Oxford Atlas (HOA). Results from our method exhibit consistently high task differentiation accuracy and interpretability.
Table 1: Test accuracy on the CamCAN dataset (containing 3 tasks) using the HOA atlas
||SupNMF| | |NMF| ||
|-|-|-|-|-|-|-|
|| KNN | MLP| SVM|KNN | MLP| SVM|
|rank=6|73.56±4.73|74.04±5.83|73.35±5.35|71.04±6.92|73.22±6.69|72.82±5.42|
|rank=5|72.77±4.72|73.76±5.45|73.76±5.58|70.96±7.58|73.09±6.96|72.2±5.91|
|rank=4|72.07±4.68|73.03±4.83|75.59±5.38|69.52±8.4|71.22±7.54|70.53±6.23|
|rank=3|71.06±3.83|71.6±4.39|75.43±4.52|66.38±9.74|68.4±7.77|67.45±5.61|
**Q3**: It is quite straightforward that task-wise supervision during decomposition will, of course, increase the accuracy of the downstream task prediction. Are there other benefits? What if there is a label set difference between the training and testing set?
**A3**: Our motivation for integrating supervision is not merely task prediction, but rather to uncover canonical patterns in the functional physiology of the human brain while performing tasks. We accomplish this by deconvolving observed connectomic signals into a set of primitive connectomes that are largely unique to tasks. This is the main contribution of our work. The fact that our methods also yield excellent classification accuracy is an added benefit.
We observe that our method finds patterns that are supported by neuroscience experiments reported in the literature. For example, regions in the left prefrontal cortex are associated with word and sentence comprehension [16]; these regions are over-represented in A4 of Fig 6, which contributes only to the Language task, as shown in S4 of Fig 5. The dorsal Default Mode Network (dDMN) is known to be active during Rest [5]. The anatomical regions for this functional network in the posterior cingulate cortex (the limbic node) and the angular gyrus, found in the posterior part of the inferior parietal lobe, are over-represented in A9 of Fig 6. Additionally, substructures corresponding to the dorsal medial prefrontal cortex are also found in A9. We see that 'rest' connectomes are strongly activated for the corresponding column of the S matrix, as shown in Figure 4. The regions implicated in social processing are in the medial prefrontal cortex, located in the frontal lobe [15]; in our results, these nodes are over-represented in A18. Finally, the regions implicated in relational processing are the dorsolateral prefrontal cortex, rostrolateral prefrontal cortex, and posterior parietal cortex [23]; these regions are over-represented in A20 and A3, respectively.
In contrast to existing methods, our model offers valuable insights into cognitive tasks, providing both high interpretability and efficiency.
**Q4**: Including supervision in dimension reduction / decomposition has a long history. I think the very basic baseline should be LDA rather than NMF or SVD as it is a supervised method.
**A4**: While LDA and our model both employ supervision, their objectives differ. LDA seeks linear discriminants to enhance class separation. Although effective for class discrimination, LDA does not yield an interpretable "basis" matrix, a necessary feature of our framework. Our methodology enables the extraction of distinct "building blocks" from the deconvolved data matrix, facilitating subsequent brain-region correlation analysis and visualization of the associated connectomes. This aids in discerning region-specific functions tied to distinct tasks. A more relevant baseline from the neurosciences community is ICA. We present new results demonstrating the superiority of our method over ICA, which can be found in the author rebuttal PDF.
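To illustrate why a non-negative basis is directly readable as additive building blocks, unlike LDA's discriminants, here is plain unsupervised NMF via Lee-Seung multiplicative updates: a generic sketch, not the authors' SupNMF, which additionally carries a supervised prediction term.

```python
import numpy as np

def nmf(X, rank, n_iter=300, eps=1e-9, seed=0):
    """Factor X ~ A @ S with A, S >= 0 (Lee & Seung multiplicative updates).
    Columns of A act as non-negative 'building blocks'; columns of S give
    each sample's non-negative mixing weights."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    A = rng.random((m, rank)) + eps
    S = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        S *= (A.T @ X) / (A.T @ A @ S + eps)   # update mixing weights
        A *= (X @ S.T) / (A @ S @ S.T + eps)   # update basis
    return A, S

# recover an exact-rank non-negative factorization on toy data
rng = np.random.default_rng(1)
X = rng.random((20, 4)) @ rng.random((4, 15))
A, S = nmf(X, rank=4)
rel_err = np.linalg.norm(X - A @ S) / np.linalg.norm(X)
```

A supervised variant would add a label-prediction loss on S, which modifies these updates; we omit it here.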
**Q5**: Perhaps the authors should discuss why SupNMF is outperforming SupSVD in Fig 2.
**A5**: SupSVD's performance is influenced by its orthonormal basis. We observe that this constraint can be limiting, as significant patterns or factors in the data may not necessarily be normal. This explains SupNMF's superior performance in Fig 2. We are happy to include this explanation in the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for the thorough rebuttal.
My main concern is with supervised baselines, and ICA does not address this issue. Supervised SVD seems like a quite outdated baseline, and simply searching for "supervised non-negative matrix factorization" already yields so much literature (not mentioned at all in the related work or the introduction) that uses supervision or semi-supervision for NMF, so I am not quite convinced where the novelty of the proposed method comes from. Moreover, including supervised SVD as a separate section is out of scope unless it is a cornerstone of the proposed method.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to review our submission and providing your insights.
First and foremost, we are more than willing to include and compare our method to any specific baseline that you deem pertinent. It would be immensely helpful if you could specify which particular baseline you'd like to see compared.
We'd also like to emphasize that NeurIPS has topics of interest that specifically call out neuroscience and cognitive science. Our contributions primarily target the advancement of functional connectomics, which is a significant subfield of neuroscience. This broader perspective might explain why some baselines that seem more mainstream in other domains are not as emphasized in our work. We genuinely believe our research adds value to this niche area.
Strengths: The paper is well written and addresses an important problem in the analysis of fMRI data in a novel way that achieves impressive performance.
Weaknesses: While the paper criticises that existing methods cannot be applied to large, diverse datasets, the paper lacks a study of the computational efficiency of the proposed approach and a comparison with existing methods.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How well does the method scale?
- What results do existing methods achieve in the performed experiments? e.g. ICA-based [13, 10, 34] or other ML-based methods [19, 26, 33, 29]
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations of the approach are not openly discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed review and helpful comments.
Weaknesses: The review identifies weaknesses in the study of computational efficiency and comparison with existing methods. To address these concerns, we have added experiments on a new larger dataset from the Cambridge Center for Aging and Neurosciences (CamCAN) and also added comparisons with ICA, the standard technique used in the neurosciences community (as also suggested by the reviewer). Our results conclusively show the superiority of our methods, as well as scalability to the larger CamCAN dataset. Detailed tables and figures can be found in the author rebuttal PDF.
We now address the specific questions raised by the reviewer in the following.
**Q1** :How well does the method scale?
In the paper, we only provide a sample size of 100 subjects, which is the largest number of "unrelated" subjects in the HCP dataset (in such neuroscience experiments, it is important to select unrelated subjects to eliminate potential bias). We have performed additional experiments on the larger dataset from the Cambridge Center for Aging and Neuroscience (CamCAN), which show that our method scales very well to increasing cohort sizes.
We also present scaling results in terms of the number of factors (ranks) from 7 (selected because we have 7 tasks) to 20 (at which point the factors have converged), demonstrating that our method scales very well in the number of factors. In general, we have not observed our computational methods to be the scaling bottleneck – the preprocessing pipeline for denoising, motion correction, and alignment takes much longer than our deconvolution methods.
Table 2: Test accuracy using different sample sizes and ranks on the HCP dataset
| rank=20 | SupNMF | | | NMF | | |
|:--------:|:----------:|:----------:|:----------:|:----------:|:----------:|:---------:|
| subjects | 20 | 50 | 100 | 20 | 50 | 100 |
| KNN | 83.79±2.41 | 90.97±1.26 | 88.7±0.84 | 76.07±9.85 | 76.29±3.79 | 83.5±1.85 |
| MLP | 86.96±5.68 | 89.36±3.11 | 88.39±2.85 | 81.07±5.77 | 86.43±2.58 | 88±2.34 |
| SVM | 88±3.26 | 89.18±2.37 | 88.79±2.17 | 66.43±6.43 | 84.14±2.51 | 87±2.8 |
| rank=15 | SupNMF | | | NMF | | |
|----------|------------|------------|------------|------------|------------|------------|
| subjects | 20 | 50 | 100 | 20 | 50 | 100 |
| KNN | 81.5±2.78 | 88.24±0.99 | 88.79±0.67 | 56.43±9.29 | 82.43±4.65 | 82.43±3.5 |
| MLP | 80.54±7.27 | 84.43±4.93 | 87.29±3.4 | 78.57±8.45 | 83.71±4.87 | 87.14±2.26 |
| SVM | 88.21±7.67 | 88.29±3.31 | 89.5±1.24 | 73.93±7.33 | 76.29±6 | 86±2.62 |
| rank=10 | SupNMF | | | NMF | | |
|----------|-------------|------------|------------|-------------|------------|------------|
| subjects | 20 | 50 | 100 | 20 | 50 | 100 |
| KNN | 83.71±1.91 | 85.72±1.44 | 88.54±0.49 | 70.71±4.74 | 75.14±4.05 | 82.64±2.02 |
| MLP | 81.43±7.46 | 88.5±3.92 | 88.14±2.16 | 72.5±8.3 | 83.43±4 | 87.36±2.33 |
| SVM | 84.64±7.67 | 88.5±2.83 | 87.64±2.16 | 71.79±11.46 | 76.43±2.49 | 86.86±1.7 |
| rank = 7 | SupNMF | | | NMF | | |
|----------|------------|------------|------------|------------|------------|------------|
| subjects | 20 | 50 | 100 | 20 | 50 | 100 |
| KNN | 74.33±2.65 | 83.27±2.66 | 85.72±2.07 | 67.14±9 | 78.29±4.77 | 83.29±1.6 |
| MLP | 76.07±6.5 | 86.5±5.61 | 86.32±6.47 | 53.21±6.28 | 82.43±5.31 | 87.71±2.19 |
| SVM | 83.57±7.35 | 86.29±8.01 | 86.79±6.2 | 63.57±7.28 | 80.14±3.53 | 84.93±2.87 |
**Q2**: What results do existing methods achieve in the performed experiments? e.g. ICA-based [13, 10, 34] or other ML-based methods [19, 26, 33, 29]
As noted in the related work section, our goals are quite different from those of ICA and ML methods. The other methods referenced above (and in our work) primarily focus on task identification accuracy. In contrast, our goal is to identify the functional basis of tasks in terms of explainable, physiologically grounded connectomes. The high classification accuracy is an added benefit of our work.
In either case, for completeness, we also include a comparison of our method with ICA in this rebuttal (and will add these results to the paper). These results show that our method is consistently more accurate than ICA across a range of cohort sizes and number of factors, while at the same time yielding interpretable results! We thank the reviewer for this comment, which motivated us to further demonstrate the power of our method.
| | | ICA | | |
|:-------:|:--------:|:-----------:|:-----------:|:----------:|
| | subjects | 20 | 50 | 100 |
| | KNN | 18.93±3.47 | 18.09±1.47 | 15.3±0.94 |
| rank=20 | MLP | 42.5±10.28 | 43.71±12.59 | 39.21±9.96 |
| | SVM | 24.64±7.04 | 18.43±5.17 | 17.64±2.35 |
| | KNN | 25.17±7.92 | 11.61±1.23 | 17.34±1.94 |
| rank=10 | MLP | 32.86±10.69 | 20.14±8.37 | 34.93±6.44 |
| | SVM | 18.57±8.86 | 18.43±4.06 | 16.43±8.97 | | Summary: This contribution presents a novel method to find a functional basis for a database of task fMRI acquired from different subjects. The functional basis, dubbed canonical task connectomes, is shard across large cohorts; can be composed into task-specific networks; and is predictive of task efficacy.
The authors produce this functional basis through supervised and non-supervised NMF and SVD. For this they propose an objective function (in equation 3) which is compatible with these methodologies.
To implement this approach the authors use the HCP100 database which has 6 cognitive tasks. To show that the obtained basis is task-specific the authors use UMAP plots of different resulting decompositions showing good separability of tasks in the embedded UMAP space. To show that their basis is generalizable across cohorts they use 80/20 splits of the HCP 100 database and use the basis to inform a classifier that predicts the task performed by an unseen subject given the fMRI acquisition. To claim physiological and anatomical grounding for their basis, the authors compare qualitatively the basis against known anatomical traits and the involvement of different basis components as important features for each task.
Strengths: This contribution is a high-quality application of known methods to the significant problem of understanding how functional connectivity in the human brain (as measured by fMRI) is centric to cognitive tasks.
The manuscript presents a well-justified formulation of the problem as a deconvolution case and solves it through different approaches, supervised and non-supervised. This formulation and resolution are original and well presented. Even if the methodological contribution is not at the center of this manuscript, the application of known techniques is well-justified and evaluated.
The evaluation of the results is a good balance of qualitative evaluation (e.g. with UMAP embeddings) and quantitative evaluation (e.g. with the clustering approaches) in the case of task-specificity of the connectomes. The generalisation experiment using a downstream classification task is also well conceived. Finally, the relation with anatomy and physiology is well organised.
In all, this contribution presents a very good application of known methods to an important problem in neuroimaging. So it's a paper that will have impact in one area, the neuroimaging one.
Weaknesses: I find two weaknesses, both related to the claims of cohort generalisability. In short, with the availability of public fMRI datasets, it is hard to justify a cohort generalisation claim while staying within one 100-subject database, especially when the 100-subject set used is a subsample of a 1,200-subject database. In light of this, the second weakness I find is the lack of an analysis of the stability of the functional basis across datasets, including a study of dataset size.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: My main questions will be database specific.
First, how many subjects are needed to obtain the functional basis, or canonical task connectomes? For this the authors could provide a learning curve-style analysis analysing the dispersion of the found connectomes with respect to the sample size.
Second, to properly claim cross-cohort generalisability, the authors should use a second large cohort, such as UK Biobank or ABCD, which, admittedly, have different task fMRI protocols. Short of this, the authors might look into toning down the cross-cohort claim.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have not explicitly mentioned the limitations in the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed review and helpful comments.
Weaknesses: The review identifies two weaknesses: lack of generalizability across studies and stability of the functional basis across cohorts. We have added a new dataset from the Cambridge Centre for Ageing and Neuroscience (CamCAN) using a different atlas (the Harvard-Oxford Atlas), and show that our results generalize across these rather diverse cohorts and that the factors identified by our deconvolution method are stable across studies. Please see the new results included in the PDF of the authors' response.
We now address the specific questions raised by the reviewer.
**Q1**: First, how many subjects are needed to obtain the functional basis, or canonical task connectomes? For this the authors could provide a learning curve-style analysis analyzing the dispersion of the found connectomes with respect to the sample size.
**A1**: In our study, we present results for rank=10 and rank=20 with a sample size of 100. Presented below are additional experimental outcomes for varying ranks and sample sizes:
As shown in the table, SupNMF yields consistently high performance across varied subject numbers and ranks (number of factors). In contrast, NMF experiences a significant drop in accuracy with fewer subjects.
Table 1: Test accuracy using different sample sizes and ranks on the HCP dataset
| rank=20 | SupNMF | | | NMF | | |
|:--------:|:----------:|:----------:|:----------:|:----------:|:----------:|:---------:|
| subjects | 20 | 50 | 100 | 20 | 50 | 100 |
| KNN | 83.79±2.41 | 90.97±1.26 | 88.7±0.84 | 76.07±9.85 | 76.29±3.79 | 83.5±1.85 |
| MLP | 86.96±5.68 | 89.36±3.11 | 88.39±2.85 | 81.07±5.77 | 86.43±2.58 | 88±2.34 |
| SVM | 88±3.26 | 89.18±2.37 | 88.79±2.17 | 66.43±6.43 | 84.14±2.51 | 87±2.8 |
| rank=15 | SupNMF | | | NMF | | |
|----------|------------|------------|------------|------------|------------|------------|
| subjects | 20 | 50 | 100 | 20 | 50 | 100 |
| KNN | 81.5±2.78 | 88.24±0.99 | 88.79±0.67 | 56.43±9.29 | 82.43±4.65 | 82.43±3.5 |
| MLP | 80.54±7.27 | 84.43±4.93 | 87.29±3.4 | 78.57±8.45 | 83.71±4.87 | 87.14±2.26 |
| SVM | 88.21±7.67 | 88.29±3.31 | 89.5±1.24 | 73.93±7.33 | 76.29±6 | 86±2.62 |
| rank=10 | SupNMF | | | NMF | | |
|----------|-------------|------------|------------|-------------|------------|------------|
| subjects | 20 | 50 | 100 | 20 | 50 | 100 |
| KNN | 83.71±1.91 | 85.72±1.44 | 88.54±0.49 | 70.71±4.74 | 75.14±4.05 | 82.64±2.02 |
| MLP | 81.43±7.46 | 88.5±3.92 | 88.14±2.16 | 72.5±8.3 | 83.43±4 | 87.36±2.33 |
| SVM | 84.64±7.67 | 88.5±2.83 | 87.64±2.16 | 71.79±11.46 | 76.43±2.49 | 86.86±1.7 |
| rank = 7 | SupNMF | | | NMF | | |
|----------|------------|------------|------------|------------|------------|------------|
| subjects | 20 | 50 | 100 | 20 | 50 | 100 |
| KNN | 74.33±2.65 | 83.27±2.66 | 85.72±2.07 | 67.14±9 | 78.29±4.77 | 83.29±1.6 |
| MLP | 76.07±6.5 | 86.5±5.61 | 86.32±6.47 | 53.21±6.28 | 82.43±5.31 | 87.71±2.19 |
| SVM | 83.57±7.35 | 86.29±8.01 | 86.79±6.2 | 63.57±7.28 | 80.14±3.53 | 84.93±2.87 |
Accompanying visualizations illustrating task accuracy spanning ranks 7 through 20 and varying subject counts are provided in the author rebuttal PDF. This enhanced analysis will be incorporated into the final version.
**Q2**: Second, to properly claim cross-cohort the authors should properly use a second large cohort, such as UK Biobank or ABCD which, admittedly, have different task fMRI protocols. Short of this, the authors might look into toning down the cross-cohort claim.
**A2**: Addressing the cross-cohort question, we applied our methodology to the CamCAN dataset, segmented using the Harvard-Oxford Atlas (HOA). The outcomes consistently exhibit reliable task differentiation and interpretability, supporting the generalizability claims in the paper. The figure showing clear discrimination among the CamCAN tasks can be found in the author rebuttal PDF.
Table 2: Test accuracy on the CamCAN dataset (containing 3 tasks) using the HOA atlas
| |SupNMF | | | NMF | | |
|---------|-------------------|---------------|-------------------|----------------|-----------------|------------------|
| | KNN | MLP | SVM | KNN | MLP | SVM |
|rank=6| 73.56±4.73 | 74.04±5.83 | 73.35±5.35 | 71.04±6.92 | 73.22±6.69 | 72.82±5.42 |
|rank=5| 72.77±4.72 | 73.76±5.45 | 73.76±5.58 | 70.96±7.58 | 73.09±6.96 | 72.2±5.91 |
|rank=4| 72.07±4.68 | 73.03±4.83 | 75.59±5.38 | 69.52±8.4 | 71.22±7.54 | 70.53±6.23 |
|rank=3| 71.06±3.83 | 71.6±4.39 | 75.43±4.52 | 66.38±9.74 | 68.4±7.77 | 67.45±5.61 |
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the responses to my questions. I will now update my score to accept. | Rebuttal 1:
Rebuttal: We thank the reviewers for their positive and constructive feedback.
Now we address some common questions raised by the reviewers in the following two aspects:
1. Motivation and contribution of this work
Discriminating tasks serves as a downstream objective following the identification of physiologically interpretable cognitive "building blocks" within task connectomes. Several inquiries have arisen regarding task accuracy and its comparison to other classification techniques, including CNN/GNN, PLS, and LDA. We aim to clarify our primary motivation and contribution: to discern the fundamental neural patterns enabling diverse complex cognitive tasks. Our methodology identifies patterns corroborated by existing neuroscience research. For instance, regions within the left prefrontal cortex, prominent in A4, are uniquely linked to language tasks, underscoring their association with word and sentence comprehension [16]. Compared with existing methods, our framework provides meaningful insight to uncover the functional basis of various tasks, and in the process, also yields excellent task discrimination.
2. Generalization and scale
To address concerns of generalization and scalability, we evaluated variations in both subject sizes and factor numbers (rank) and expanded our tests to include the CAMCAN dataset and different atlases. Comprehensive tables and figures are available in the attached PDF.
The table indicates that SupNMF consistently maintains robust performance irrespective of changes in subject count or rank (factor numbers). Conversely, NMF's accuracy noticeably declines with a reduced number of subjects. We have also provided visual illustrations showcasing task accuracy across ranks from 7 to 20 factors and different subject sizes. This extended analysis will be integrated into the paper's final edition.
In our study, we employed 100 "unrelated" subjects from the HCP dataset, in line with standard practice in the domain; this is the maximum count of unrelated subjects obtainable from the HCP dataset. This selection was made to preclude any bias from physiological resemblances among related subjects.
Pdf: /pdf/0618f82c2070bd0e5852555fcd171cd7755a6fa9.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper presents a novel framework for fMRI analysis that aims to deconvolve complex neuronal networks into task-specific elementary networks called "canonical task connectomes." The proposed method utilizes supervised matrix factorization to identify these task-specific networks and demonstrates their interpretability and generalizability. The study showcases experimental results on the Human Connectome Project dataset, highlighting the ability of the framework to capture the natural task-specific structure in neuroimages.
Strengths: - The paper presents a new problem formulation and introduces the SupNMF method, which is a novel approach to identifying task-specific networks. The authors demonstrate the usefulness of the proposed framework in identifying canonical task connectomes that have a strong physiological basis and can be mapped to regions of the brain to identify physiological underpinnings of tasks.
- the authors present the problem formulation and the proposed method in a clear and concise manner.
- The proposed interpretable framework has the potential to advance understanding of complex cognitive processes and to identify biomarkers for predicting tasks.
- The authors also provide a comprehensive discussion of relevant methods and materials.
Weaknesses: - While the authors present comprehensive experimental results, they could provide more details on the performance of the proposed framework in comparison to other state-of-the-art methods. Additionally, the authors could provide more details on the interpretability of the identified canonical task connectomes and how they relate to existing literature in neurosciences.
- While the authors briefly mention the potential applications of the framework in understanding shared and unique functional networks across different pathologies and how task-specific networks can get dysregulated due to the onset and progression of diseases, a more in-depth discussion of these applications and their potential impact on the field would be helpful.
- The authors did not explain in detail why the “unrelated set” of subjects in the Human Connectome Project was selected. Also, more datasets are expected to be included to demonstrate the generalizability of the proposed method.
- The authors could provide more details on how they determined the optimal number of latent connectomes and how this choice impacts the results. One potential concern of the proposed framework is that it relies on the assumption that the observed connectome matrix can be represented as a linear combination of a small number of latent matrices. While this assumption may hold for some datasets, it may not be applicable to all fMRI datasets, especially those with high levels of noise or variability. Additionally, the choice of the number of latent connectomes (i.e., the dimensionality of latent space) is critical and may impact the performance of the proposed framework.
Another potential weakness of the proposed framework is that it requires task-label vectors for each connectome in the dataset. While the authors provide details on how they obtained the task-label vectors for the HCP dataset, it may not be feasible to obtain such labels for all fMRI datasets. Additionally, the choice of the task-label vectors may impact the performance of the proposed framework, and the authors could provide more details on how they selected the task-label vectors and how this choice impacts the results.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Could you provide a more detailed comparison of the performance of the proposed framework with more state-of-the-art methods of other types (CNNs, GNNs)? How does the proposed framework outperform or differ from existing approaches in terms of accuracy and generalizability?
- Can you elaborate more on the interpretability of the identified canonical task connectomes? How do these connectomes relate to existing literature in neurosciences? Are there any specific brain regions or networks that are consistently identified across different tasks?
- How generalizable are the findings of this study? Are there any potential biases or confounding factors that could impact the results?
- How does the proposed framework handle potential confounding factors such as motion artifacts or physiological noise? Were any specific preprocessing steps or techniques employed to address these confounds?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: While the proposed framework has the potential to advance our understanding of complex cognitive processes and to identify biomarkers for predicting tasks, it is important to consider the potential ethical implications of this research. For example, the use of fMRI data for predicting cognitive states or identifying biomarkers could raise concerns about privacy, informed consent, and potential misuse of the data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed review and helpful comments.
Weaknesses:
1. Comparison to state-of-the-art methods: We have also added comprehensive experiments and comparisons to ICA, which is the most commonly used method in this domain. Please see the results in the author rebuttal PDF. Our superior results firmly establish the benefits of our method compared to the current best/commonly used methods in the area.
2. Interpretability: We have now added a section on interpreting the factors and their mapping to human brain regions, and a discussion of how these regions are known to function for the corresponding tasks.
3. Linearity assumptions: This is indeed a valid consideration. However, we submit that ours is the first such investigation on deconvolving brain connectomes into interpretable factors in the area, and our results are highly promising notwithstanding our linearity assumption. Indeed, we expect follow-on efforts that may investigate non-linear superpositions as well.
4. Unrelated subjects: This is indeed the norm in the area to eliminate any biases arising from physiological similarities among related subjects.
5. Generalizability to other datasets: We have added experiments on a new dataset – the Cambridge Centre for Ageing and Neuroscience (CamCAN) – along with a new atlas – the Harvard-Oxford Atlas. Our results show excellent generalizability beyond the Human Connectome Project dataset in our original submission.
6. Choice of the number of factors: We now show that our results are robust across choices of the number of factors (please see the results in the PDF).
We now address the specific questions raised by the reviewer in the following.
**A1**: Our framework is fundamentally different from CNNs or GNNs, which focus on prediction accuracy. However, these methods cannot identify specific sub-connectomes, associated anatomical parts of the brain, and their contribution to various tasks. The latter is a key motivation for our proposed work. We discuss this in our response to Q2 in detail.
**A2**: This is an important question – one that provides motivation, and is the main contribution of our work. As a concrete example of interpretability, after we construct the “basis matrix” A of size (64620,20) by deconvolving data matrix X, we have the 20 "building blocks" A1-A20 (20 columns in A) for different tasks. The number of rows, 64620, is the dimension that corresponds to the region index, which allows us to map each of the 20 factors back to the brain and obtain a $360 \times 360$ region correlation matrix. We then apply the BioImageSuite tool on these correlation matrices using node definitions from the atlas of Glasser et al. 2016 to visualize each of the 20 connectomes. By examining the connectomes (i.e., each of the 20 factors), we uncover the region functions associated with specific tasks.
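A hedged sketch of this factor-to-brain mapping is given below. It uses plain unsupervised NMF from scikit-learn as a stand-in for our SupNMF, a reduced toy scale of 20 regions instead of the 360-region atlas, and random data in place of the real data matrix X; only the folding of a vectorized upper-triangular factor back into a symmetric region-by-region matrix follows the procedure described above.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy scale for illustration only: 20 regions -> 190 region pairs.
# (The paper's atlas has 360 regions, i.e. 360 * 359 / 2 = 64620 pairs.)
n_regions = 20
n_pairs = n_regions * (n_regions - 1) // 2
rng = np.random.default_rng(0)

# Random stand-in for the data matrix X: one row per vectorized task connectome.
X = rng.random((30, n_pairs))

# Plain unsupervised NMF as a simplified stand-in for SupNMF.
model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)   # per-connectome factor loadings
A = model.components_.T      # basis matrix A: one column per factor

def factor_to_matrix(a_col, n=n_regions):
    """Fold a vectorized upper-triangular factor into a symmetric region x region matrix."""
    mat = np.zeros((n, n))
    mat[np.triu_indices(n, k=1)] = a_col
    return mat + mat.T

C4 = factor_to_matrix(A[:, 3])  # e.g. the fourth factor, analogous to A4
```

At full scale, n_regions = 360 yields exactly 360 × 359 / 2 = 64620 region pairs, matching the 64620 rows of the basis matrix A; each folded matrix can then be visualized as a connectome.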
Examining the factors (A1-A20), we find that there is a very clean separation for different tasks (in other words, factors are strongly associated with tasks). In Fig. 5, we can see all factors contributing strongly to only 1-2 tasks. This suggests two important facts: (i) the factors correspond to the functional physiology, as it relates to the tasks; and (ii) localization of the functional regions can be used for task discrimination using inexpensive modalities, such as EEGs. These physiological associations are partially confirmed by domain studies – for example, regions in the left prefrontal cortex are associated with word and sentence comprehension [16] and are over-represented in A4 of Fig. 6, which contributes only to the Language task, as shown in S4 of Fig. 5.
Regarding the last question, “Are there any specific brain regions or networks that are consistently identified across different tasks?”: since we examine the problem of task-specific factors, we aim to find regions that discriminate across different tasks and remove the noise and the basal signal in our brains. Indeed, our method can be easily modified to extract these background signals as well.
In summary, compared with existing methods, our framework provides interpretable and discriminating signals that provide the functional basis of various tasks. We attach figures and additional explanations in our PDF rebuttal.
**A3**: Scale and generalizability are indeed important issues. Here, we provide more experimental details on these aspects:
For generalization, we add additional results on the robustness of our method by examining our results across the full range of number of factors from 7 to 20 (7 because we have 7 tasks in HCP and 20 because the dominant factors saturate by rank 20). Our results are remarkably robust across this range of number of factors. Furthermore, with respect to generalizability beyond datasets, we have now added another dataset CamCAN, which confirms the superiority of our methods across two vastly different cohorts, acquisition protocols, MRI platforms, and atlases. Please see all detailed results in PDF.
Our experimental protocols (use of unrelated subjects), unbiased sampling of subjects, and use of different acquisition protocols, atlases, and instrumentation, aim to eliminate other confounders.
**A4**: As expected, fMRI data from HCP has significant noise, including motion artifacts and physiological variability, which is the reason our problem is complex. We use a sophisticated pipeline for denoising, motion correction, and registration to cancel noise, head motion, and instrumentation effects. We then register all of the images to a reference model (MNI) so we can apply an atlas for segmenting the brain to different regions. Our excellent results indicate that these pipelines are able to significantly reduce noise and variability.
---
Rebuttal 2:
Title: Response to Rebuttal
Comment: Thanks to the authors for the reply. I believe this work presents some interesting insights on interpretable connection analysis. | null | null | null | null | null | null
GALOPA: Graph Transport Learning with Optimal Plan Alignment | Accept (poster) | Summary: The paper proposes a novel paradigm for self-supervised graph learning based on optimal plan alignment named GALOPA, which leverages optimal transport theory to align the optimal transport plans for graphs and node representations, resulting in an improvement in the quality of graph representations. Unlike existing methods, GALOPA does not require generating positive/negative sample pairs, simplifying the data requirements. GALOPA further introduces a new loss to enable the sharing of the exact matching information from graphs space to the representation space. The paper experimentally shows that GALOPA achieves state-of-the-art performance and robustness compared to previous methods on various benchmark datasets.
Strengths: 1. The problem studied is an interesting and intuitive one that directly exploits and aligns the matching information common to both input and output spaces. This seems to combine the ideas of both graph contrastive learning and graph auto-encoders, while avoiding the limitations of both: the former needs to distinguish between positive and negative samples, while the latter needs to reconstruct the original signals (e.g. structure or node features), where the comparison takes place in the same space (graph space). The idea seems to be useful and extendable to other types of data (e.g. images or text) for self-supervised representation learning.
2. The paper is well written and organized. I can easily understand the motivation and the proposed method. The illustrations can give readers a deeper understanding.
3. The experimental design is good, and sections 5 and 6. They explore the rationality of the proposed methodology and dissect its validity. The interpretation of the experimental results is convincing. The results in section 8 verify that the algorithm is state of the art.
Weaknesses: 1. The time complexity analysis may be better placed in the text than in the appendix.
2. Some typos: L54 (generate->generating), L155 (node->nodes), L310 (require->requires).
3. The authors should conduct a Wilcoxon signed-rank test to verify the proposed method in Tables 1-2.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. The authors should provide more detailed information about the experimental setup, such as the size of the graph dataset used.
2. The authors can provide a reasonable explanation for why using only node attributes yields the best performance in Section 6.
3. Please review the paper and ensure that all the characters are in black color.
4. In Figure 5, ‘ACCUTACY’ -> ‘ACCURACY’.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have well discussed the limitations of their model and possible solutions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for describing our work as interesting and for recognizing its superior performance and well-designed experiments. We respond to the reviewer's questions below.
> **Q1. The time complexity analysis may be better placed in the text than in the appendix.**
Thanks for pointing this out. We will add time complexity analysis to the paper. In addition we have analyzed more ways to reduce time complexity in *GQ2* of the $\color{red}\text{global rebuttal}$.
> **Q2. Some typos: L54 (generate->generating), L155 (node->nodes), L310 (require->requires). In Figure 5, ‘ACCUTACY’ -> ‘ACCURACY’.**
Thanks for your valuable feedback on the typos. We will correct these errors and carefully check for any other errors that may exist in the paper.
> **Q3. Conduct Wilcoxon signed rank test to verify proposed method in Table 1-2.**
Upon the reviewer's request, we performed the Wilcoxon signed-rank test comparing GALOPA with the baselines on the node classification and graph classification datasets, respectively. Tables VI and VII report the p-values of the Wilcoxon signed-ranks test for GALOPA at the 0.05 significance level against the node classification baselines and the graph classification baselines, respectively. If the p-value is small, we can reject the hypothesis that the difference is due to chance and conclude that the population median differs from the performance of the baseline model.
*Table VI. The p-values for the Wilcoxon signed-ranks test on **node** classification datasets at 0.05 significance level.*
| GALOPA vs. | p-value |
| :--------: | :---: |
| BGRL | 0.015 |
| GCA | 0.007 |
| GRACE | 0.007 |
| MVGRL | 0.007 |
| DGI | 0.007 |
| VGAE | 0.007 |
| GAE | 0.007 |
| Node2Vec | 0.007 |
| DeepWalk | 0.007 |
*Table VII. The p-values for the Wilcoxon signed-ranks test on **graph** classification datasets at 0.05 significance level.*
| GALOPA vs. | p-value |
| :--------: | :---: |
| SimGrace | 0.078 |
| RGCL | 0.078 |
| JOAOv2 | 0.046 |
| AD-GCL | 0.015 |
| GraphCL | 0.078 |
| InfoGraph | 0.078 |
| Graph2Vec | 0.031 |
| Sub2Vec | 0.015 |
| DGK | 0.031 |
| WL | 0.078 |
| GK | 0.015|
As shown in the tables, GALOPA achieves superior performance against all the baselines.
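To make the test procedure concrete, the comparison for a single baseline can be reproduced with `scipy.stats.wilcoxon`; the per-dataset accuracies below are hypothetical placeholders used only to illustrate the computation, not values from our experiments.

```python
from scipy.stats import wilcoxon

# Hypothetical per-dataset accuracies for GALOPA vs. one baseline (7 datasets).
# These numbers are illustrative only, not taken from the paper.
galopa   = [83.8, 84.6, 86.1, 79.2, 93.0, 88.5, 92.7]
baseline = [82.1, 83.0, 85.2, 78.1, 92.0, 87.9, 91.5]

stat, p = wilcoxon(galopa, baseline)  # paired samples, two-sided by default
# With 7 untied, all-positive differences, the exact two-sided p-value is
# 2 / 2**7 = 0.015625 < 0.05, so the null of equal medians is rejected.
print(p)
```

A small p-value here rejects the hypothesis that the paired accuracy differences are due to chance, which is the criterion used in Tables VI and VII.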
> **Q4. Provide more detailed information about the experimental setup, such as the size of the graph dataset used.**
We thank reviewer q42Z for the suggestion and add descriptions of the node classification and graph classification datasets used as follows:
*Table VIII. The statistical information of node classification datasets.*
| Dataset | Nodes | Edges | Classes | Feat. |
| :--------- | ----: | -----: | ------: | ----: |
| Cora | 2708 | 10556 | 7 | 1433 |
| CiteSeer | 3327 | 9228 | 6 | 3703 |
| PubMed | 19717 | 88651 | 3 | 500 |
| WikiCS | 11701 | 216123 | 10 | 300 |
| Coauthor-CS | 18333 | 327576 | 15 | 6805 |
| Amz-Comp. | 13752 | 574418 | 10 | 767 |
| Ama-Photo | 7650 | 287326 | 8 | 745 |
*Table IX. The statistical information of graph classification datasets.*
| Dataset | Graphs | Avg. Nodes | Avg. Edges | Classes |
| :--------- | ----: | -----: | ------: | ----: |
| PROTEINS | 1113 | 39.06 | 72.82 | 2 |
| DD | 1178 | 284.32 | 715.66 | 2 |
| MUTAG | 188 | 17.93 | 19.79 | 2 |
| NCI1 | 4110 | 29.87 | 32.30 | 2 |
| COLLAB | 5000 | 74.49 | 2457.78 | 3 |
| IMDB-B | 1000 | 19.77 | 96.53 | 2 |
We'll add it to the paper.
> **Q5. Provide a reasonable explanation for why using only node attributes yields the best performance in Section 6.**
The main reasons why good performance can be achieved by utilizing only the node attributes in Section 6 are as follows: 1) The constraint $\mathcal{L_{(im)strc}}$ aims to correct the encoder so that the output node representations capture the implicit structure information; thus, it also provides correction information to the encoder when explicit structural information (edges) is missing. 2) This implicit structural information may not only manifest the explicit structure but also provide abundant auxiliary relations. Thus, the best performance may be obtained even if only node attributes are used.
> **Q6. Please review the paper and ensure that all the characters are in black color.**
Thanks for pointing this out. We used red markers for some key concepts to make them easier for the reviewers to read, and we'll change those colors back to black afterwards.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. My concerns are well addressed.
---
Reply to Comment 1.1.1:
Title: Glad to hear that your concerns have been addressed.
Comment: We're glad to hear that we have addressed your concerns! Thanks for spending time on our submission, which makes our paper even stronger. **We will include these comparisons and results in the final version.** | Summary: In this submission, the authors proposed a new self-supervised method for graph representation learning.
Unlike existing contrastive learning methods, the authors consider 1) the consistency between the optimal transport plan defined on the graph pairs and that defined on their node embeddings, and 2) the consistency between the ground cost defined on the graph pairs and the cost derived from the node embeddings.
Taking these two consistency terms as the objective function, the authors learn the representation model, achieving encouraging performance in node classification and graph classification tasks.
In addition, detailed analytic experiments are designed, providing valuable insights for graph self-supervised learning.
--- After rebuttal ---
Thanks for the authors' efforts in the rebuttal phase. Although the proposed method needs some modifications when dealing with heterophilic graphs, the methodology itself provides a new perspective of self-supervised learning for GNNs, and the experimental results are convincing. Therefore, my final score is "borderline accept".
Strengths: 1. The paper is easy to follow.
2. The analytic experiments are comprehensive. Some analytic results are interesting and can provide useful insights for the design of graph self-supervised learning methods.
3. The idea of leveraging the OT-based consistency between raw data and latent representation is interesting and differs from existing graph self-supervised learning methods.
Weaknesses: 1. The sigma in the fused GW term and the rho in the objective function are key hyperparameters, which may impact the performance of the proposed method significantly. However, the authors neither show the robustness of the method to the hyperparameters nor discuss the selection mechanism of the hyperparameters.
2. In my opinion, the computational complexity of the proposed method may be very high, which may limit the practical applications of the proposed method.
3. Some implementation details are missing, e.g., the algorithms to compute the fused GW distance between graphs and the OT distance between node embeddings. Additionally, it would be nice if the authors could provide an algorithmic scheme to show the whole self-supervised learning pipeline.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. When applying the proposed method, if the graphs are augmented randomly in different batches/epochs, we have to compute the fused GW distance between the augmented graphs iteratively. If my understanding is correct, the computational complexity of the method will be very high. How to solve/mitigate this problem?
2. For the node classification experiments in Table 1 and the analytic experiments in Figure 4, the authors seem to consider the homophilic graphs only. Could the authors test the proposed method on heterophilic graphs?
3. In Tables 1 and 2, the authors compared the proposed method with other graph contrastive learning methods. Do all the methods use the same backbone GNN? If the methods use different GNNs, how to ensure the fairness of the comparison?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The high computational cost may limit the application of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for describing our work as interesting and for recognizing the valuable insights of our work for graph self-supervised learning. We respond to the reviewer’s concerns **below** and in the **global response above**.
> **Q1. The sensitivity analysis of $\sigma$ and $\rho$, and discuss the selection mechanism.**
For the selection of $\rho$ and $\sigma$, we search for the optimal configuration over the sets {$10^{-3}, 10^{-2}, \ldots, 10^2, 10^3$} and {$0, 0.1, 0.2, \ldots, 0.9, 1$}, respectively, as described in the Implementation Details section (Line 350) of the paper. At the reviewer's request, we conducted robustness experiments on these two parameters. The following table shows the average node classification accuracy on the Cora dataset for different values of $\rho$ (vertical axis) and $\sigma$ (horizontal axis). For ease of viewing, we have bolded the values corresponding to non-zero $\rho$ and $\sigma$.
*Table V. The sensitivity analysis of GALOPA on data Cora to the hyperparameters $\sigma$ and $\rho$.*
| $\rho$ vs. $\sigma$ | 0 | 0.3 | 0.5 | 0.8 | 1 |
| :--------: | :---: | :--------: | :--------: | :--------: | :--------: |
| **0** | 0.813±0.35 | 0.809±0.26 | 0.801±0.38 | 0.784±0.22 | 0.776±0.30 |
| $10^{-3}$ | 0.816±0.21 | ***0.823±0.18*** | ***0.834±0.31*** | ***0.836±0.37*** | ***0.838±0.30*** |
| $10^{-2}$ | 0.814±0.27 | ***0.828±0.45*** | ***0.831±0.36*** | ***0.840±0.32*** | ***0.838±0.21*** |
| $10^{-1}$ | 0.818±0.34 | ***0.830±0.31*** | ***0.829±0.31*** | ***0.839±0.43*** | ***0.842±0.34*** |
| $10^0$ | 0.823±0.24 | ***0.834±0.26*** | ***0.835±0.15*** | ***0.838±0.20*** | ***0.841±0.27*** |
| $10^1$ | 0.819±0.29 | ***0.838±0.40*** | ***0.834±0.38*** | ***0.833±0.27*** | ***0.832±0.38*** |
| $10^2$ | 0.821±0.18 | ***0.826±0.29*** | ***0.829±0.34*** | ***0.841±0.14*** | ***0.839±0.38*** |
| $10^3$ | 0.820±0.33 | ***0.824±0.36*** | ***0.833±0.26*** | ***0.832±0.28*** | ***0.841±0.31*** |
From the table we can see that when we **remove the implicit structure constraint** $\mathcal{L}_{(im)strc}$ ($\rho=0$), the performance of GALOPA drops dramatically if we also forgo the explicit edge structure ($\sigma=1$). Conversely, the more we rely on the edge structure ($\sigma<1$, i.e., the smaller $\sigma$ is), the better the algorithm performs. Additionally, we discuss the case where **only the node attributes are considered without the explicit edge structure** ($\sigma=1$). In this case, adding the implicit structure constraint ($\rho \neq 0$) yields superior performance.
Combining these two cases, it can be concluded that the **implicit structural constraint** $\mathcal{L}_{(im)strc}$ **does capture the internal structure of the graph**. Furthermore, we find that the algorithm is robust to the parameters $\rho$ and $\sigma$: performance **fluctuates only slightly across different values (>0)** of $\rho$ and $\sigma$.
> **Q2. Provide the algorithms to compute the fused GW distance between graphs and the OT distance between node embeddings.**
The reviewer can refer to *GQ1* in the $\color{red}\text{global rebuttal}$.
> **Q3. Reduce the complexity of computing the FGW between the augmented graphs iteratively?**
The reviewer can refer to *GQ2* in the $\color{red}\text{global rebuttal}$.
> **Q4. Could the authors test the proposed method on heterophilic graphs?**
Thank you for the suggestion. However, our proposed algorithm mainly focuses on homophilic rather than heterophilic graphs. Processing heterophilic graphs well would require some modifications to the algorithm, which may consume considerable time. Specifically, the implicit structural constraint $\mathcal{L_{(im)strc}}$ of the algorithm contains an assumption specific to homophilic graphs, i.e., the implicit structure captured by the term $\mathcal{L_{(im)strc}}$ has the property that "**similar nodes tend to be neighborly**", which is contrary to the explicit edge connectivity assumption of heterophilic graphs, i.e., that **neighboring nodes are dissimilar**.
> **Q5. Do all the methods use the same backbone GNN? If the methods use different GNNs, how to ensure the fairness of the comparison?**
Yes, as mentioned by the reviewer, in order to ensure experimental fairness, we try to ensure that each algorithm uses backbone encoders with the same implementation (e.g., the number of layers and dimensions of the hidden layers).
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' reply. However, the rebuttal raises my concerns about the universality of the proposed method --- it seems that the proposed method cannot be applied to heterophilic graphs directly, and its usefulness across various GNN architectures is not investigated yet.
---
Reply to Comment 1.1.1:
Title: Response (1/2) to Reviewer bARL
Comment: Thank you for your valuable perspectives. We acknowledge any confusion that may have arisen from our previous response. We have carefully considered your concerns and aim to provide a more comprehensive clarification. To better convey our work, we summarize three crucial aspects below, each further supported by empirical evaluations.
> 1) Universality of GALOPA: Our proposed GALOPA framework indeed possesses the flexibility to be employed on both homophilic and heterophilic graphs. To adapt GALOPA for heterophilic graphs, we only need to replace the current backbone encoder with a suitable one tailored for heterophilic graph scenarios.
According to [1], the heterophily restricts the learning ability of existing homophilic GNNs on general graph-structural data, resulting in significant performance degradation on heterophilic graphs. GALOPA is a flexible OT-based self-supervised framework. The choice of a backbone for graph encoding in GALOPA is not rigidly tied to the proposed framework. This flexibility empowers users to select different backbones based on the specific context. For instance, transitioning GALOPA from homophilic to heterophilic graph settings involves a straightforward substitution of the current homophilic encoder with a suitable heterophilic encoder, serving as the backbone for GALOPA.
To demonstrate this, we conducted a new set of experiments on four heterophilic graph datasets, i.e., *Chameleon, Wisconsin, Cornell, and Squirrel*. In these experiments, we compared GALOPA against a state-of-the-art homophilic graph method (BGRL) and a heterophilic graph method (SP-GCL [2]). For both BGRL and GALOPA, we assessed two scenarios, employing both the traditional GNN encoder (HoGNN) used in the paper and a specialized heterophilic GNN encoder (HeGNN) based on the structure proposed in [3]. We retained SP-GCL's original encoder design, as SP-GCL is specifically tailored for heterophilic graphs. The results are presented below.
| Alg. | Wisconsin | Cornell | Squirrel | Chameleon |
| :--------- | :---: | :----: | :-----: | :---: |
| BGRL(HoGNN) | 0.523±0.27 | 0.561±0.34 | 0.462±0.31 | 0.634±0.51 |
| BGRL(HeGNN) | 0.685±0.22 | 0.579±0.29 | 0.468±0.36 | 0.636±0.45 |
| SP-GCL | 0.635±0.18 | 0.586±0.33 | **0.522±0.47** | 0.653±0.36 |
| GALOPA(HoGNN) | 0.627±0.24 | 0.577±0.25 | 0.428±0.39 | 0.598±0.42 |
| GALOPA(HeGNN) | **0.731±0.26** | **0.682±0.31** | 0.473±0.28 | **0.654±0.39** |
The results demonstrate clear performance enhancements in GALOPA when transitioning the backbone from HoGNN to HeGNN across all heterophilic datasets. For instance, Wisconsin exhibits a notable 16.6% enhancement (from 0.627 to 0.731), while Chameleon shows a 9.4% uplift (from 0.598 to 0.654). Importantly, GALOPA consistently surpasses BGRL when employing the same encoder. Additionally, in comparison to SP-GCL, a leading heterophilic graph solution, GALOPA outperforms it on three of the four datasets. This robust performance reinforces the efficacy of GALOPA on heterophilic graphs.
> 2) Versatile Performance Across Graph Types: GALOPA demonstrates strong performance across both homophilic and heterophilic graph data, utilizing a single adaptable backbone.
We demonstrate that if a graph encoder performs effectively on both homophilic and heterophilic graphs, the same holds true for GALOPA when utilizing this encoder as its backbone. The adaptability of HeGNN to encoding both homophilic and heterophilic graphs is evident [3]. To verify this, we evaluate the performance of GALOPA(HeGNN) on three homophilic graph datasets.
| Alg. | Cora | CiteSeer | PubMed |
| :--------- | :---: | :----: | :-----: |
| GAE | 0.714±0.41 | 0.658±0.40 | 0.722±0.71 |
| VGAE | 0.773±1.02 | 0.674±0.24 | 0.758±0.62 |
| DGI | 0.823±0.71 | 0.718±0.54 | 0.767±0.31 |
| GMI | 0.823±0.65 | 0.717±0.15 | 0.793±1.04 |
| MVGRL | 0.834±0.68 | 0.732±0.48 | 0.800±0.62 |
| GRACE | 0.819±0.89 | 0.712±0.64 | 0.805±0.36 |
| GCA | 0.823±0.47 | 0.715±0.32 | 0.809±0.28 |
| BGRL | 0.813±0.54 | 0.720±0.63 | 0.805±0.30 |
| GALOPA(HoGNN) | **0.842±0.30** | 0.743±0.18 | **0.845±0.34** |
| GALOPA(HeGNN) | 0.839±0.21 | **0.745±0.34** | 0.836±0.27 |
The findings illustrate that GALOPA(HeGNN) exhibits comparable performance to GALOPA(HoGNN) on homophilic graphs while outperforming baseline methods. This can largely be attributed to its capacity to adeptly utilize the low-pass, high-pass, and identity channels within GNNs, effectively addressing the variations in both homophilic and heterophilic scenarios. These results further affirm GALOPA's capability to achieve strong performance across distinct graph types by utilizing a unified backbone.
[1] Graph neural networks for graphs with heterophily: A survey. arXiv:2202.07082.
[2] Can single-pass contrastive learning work for both homophilic and heterophilic graph?. arXiv:2211.10890.
[3] Revisiting heterophily for graph neural networks. NeurIPS2022.
---
Reply to Comment 1.1.2:
Title: Response (2/2) to Reviewer bARL
Comment: > 3) Stability Across Various GNN Backbones: GALOPA exhibits consistent stability when employing different GNNs as its backbone.
In compliance with the reviewer's request, we conducted an examination of the performance impact on GALOPA by employing diverse GNNs, specifically GCN (as used in the original paper) and SGC (with 1- or 2-hops, denoted as SGC-1 and SGC-2) [4] as encoders. The GCN structure employs a 2-layer design, while the SGC structure utilizes a 1-layer configuration by default. The hidden layer dimension for both models is set to 512.
The results obtained from these experiments highlight the stability of GALOPA's performance when different GNNs are employed as its backbone. This consistency across diverse GNN architectures underscores the robustness and versatility of our proposed approach. We also find that the model's performance improves with the increased expressive power of the backbone; for example, GALOPA(SGC-2) outperforms GALOPA(SGC-1).
| Alg. | Cora | CiteSeer | PubMed |
| :--------- | :---: | :----: | :-----: |
| MVGRL | 0.834±0.68 | 0.732±0.48 | 0.800±0.62 |
| BGRL | 0.813±0.54 | 0.720±0.63 | 0.805±0.30 |
| GALOPA(GCN) | **0.842±0.30** | **0.743±0.18** | 0.845±0.34 |
| GALOPA(SGC-2) | 0.831±0.24 | 0.732±0.38 | **0.851±0.41** |
| GALOPA(SGC-1) | 0.822±0.36 | 0.735±0.34 | 0.840±0.18 |
[4] Simplifying graph convolutional networks. ICML2019.
**Thank you for your valuable feedback. It will greatly aid in improving the quality of our manuscript. We will incorporate these comparisons and results into the final version.** | Summary: This paper proposes a new paradigm for self-supervised graph learning GALOPA based on optimal transport. It seeks to align optimal transport plans from graph space to node representation space instead of distance alignment in graph contrastive learning. The extensive experiments show that the optimal transport plan is more informative and overcomes the label-invariant assumption.
Strengths: - The problem to overcome the label-invariant assumption in augmentation-based graph self-supervised learning is interesting and significant. The idea to align the transport plan between the graph space and representation space is novel and interesting.
- The paper is well-organized and easy to follow. The introduction of related knowledge is very clear and helpful for understanding.
- The experiments validate the conclusions.
Weaknesses: - The connection and difference between the two parts of the loss are not clear
- The experiment is insufficient. It is better to add an ablation study on the two proposed losses to observe which part contributes much.
- This paper just uses the optimal transport method for self-supervised graph learning, which cannot be considered as a great innovation.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - The authors can provide the algorithms concerning how to compute the transport plan (e.g. Sinkhorn-Knopp.)
- The authors use gradient descent to optimize Eq. (12) instead of Sinkhorn-Knopp. I wonder whether the accuracy of these optimization methods will affect the results.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for describing our work as interesting and significant. We respond to the reviewers’ questions below.
> **Q1. The connection and difference between the two parts of the loss are not clear.**
Thanks to the reviewer's suggestion, we describe the relationship between these two losses in more detail below. The two losses calibrate two aspects of the encoder: STRUCTURAL INFORMATION and MATCHING INFORMATION, both of which are expected to be captured in the encoder's output node representations. $\mathcal{L_{match}}$ forces the encoder to preserve the matching relationship between graphs in the graph space by minimizing the discrepancy between the two transport plans. $\mathcal{L_{(im)strc}}$ guides the encoder to learn representations retaining the structural information inside the graph by calibrating the cost matrix of the node representations. From the optimal transport perspective, these two losses focus on the optimal transport plan and the transportation cost, respectively, which together determine the optimal transport distance.
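As a purely illustrative sketch of this division of labor (the exact loss forms are Eqs. (8)-(9) in the paper; the squared Frobenius-norm discrepancies and all names below are assumptions made only for concreteness), the two calibration terms, and the OT distance $\langle \boldsymbol{J}, \pi \rangle$ that the plan and cost jointly determine, could be written as:

```python
import numpy as np

def calibration_losses(pi_g, pi_z, cost_g, cost_z):
    """Illustrative only: the paper's Eqs. (8)-(9) define the actual losses;
    squared Frobenius-norm discrepancies are assumed here for concreteness.

    pi_g, pi_z     : transport plans in graph space / representation space
    cost_g, cost_z : ground cost on the graph pair / cost J(Z1, Z2) on embeddings
    """
    l_match = np.sum((pi_z - pi_g) ** 2)      # matching information: align the two plans
    l_strc = np.sum((cost_z - cost_g) ** 2)   # structural information: calibrate the cost matrix
    return l_match, l_strc

def ot_distance(cost, pi):
    """The plan and the cost together determine the OT distance <cost, pi>."""
    return np.sum(cost * pi)
```

When both discrepancies vanish, the plan and cost in representation space coincide with their graph-space counterparts, so the two OT distances coincide as well.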
> **Q2. Add an ablation study on the two proposed losses to observe which part contributes much.**
At the reviewer's request, we perform a loss ablation. The table below compares the performance of the algorithm when Eq. (8) or Eq. (9) is used alone as the loss.
*Table IV. Ablation study on the losses $\mathcal{L_{match}}$ and $\mathcal{L_{(im)strc}}$.*
| Loss | Cora | CiteSeer | PROTEINS | MUTAG |
| :--------: | :---: | :--------: | :--------: | :--------: |
| $\mathcal{L_{match}}+\mathcal{L_{(im)strc}}$ | **0.842±0.30** | **0.743±0.18** | **0.769±0.18** | **0.911±1.27** |
| $\mathcal{L}_{match}$ | 0.820±0.36 | 0.689±0.34 | 0.758±0.21 | 0.879±1.14 |
| $\mathcal{L}_{(im)strc}$ | 0.812±0.25 | 0.684±0.24 | 0.762±0.23 | 0.869±1.21 |
From the table we can see that using either loss alone leads to performance degradation, which verifies that each loss is indispensable. In addition, we find that the plan matching loss $\mathcal{L_{match}}$ gives relatively better performance on most datasets compared to the implicit structure loss $\mathcal{L_{(im)strc}}$, which suggests that the former may contribute more.
> **Q3. Provide the algorithms concerning how to compute the transport plan (e.g. Sinkhorn-Knopp).**
We describe the optimization algorithms below.
To solve the FGW problem of Eq. (6), we optimize the transport plan with the conditional gradient (CG) solver. At each iteration $r$, the conditional gradient algorithm [1] solves the linearized subproblem $\min_{\mathbf{X}} \langle \mathbf{X}, \nabla_\pi \rangle$, then takes a step along the direction $\mathbf{X}^{(r)} - \pi^{(r)}$, with a line search for the optimal step size. The details of the algorithm are summarized in Algorithm 1 in the Appendix.
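To make the linearize / vertex / line-search pattern concrete, here is a toy conditional-gradient loop (an illustrative example, not the paper's Algorithm 1): it minimizes a quadratic over the probability simplex, where the linear subproblem has a closed-form vertex solution. All names are our own.

```python
import numpy as np

def cg_min_quadratic_simplex(c, n_iter=1000):
    """Toy conditional-gradient (Frank-Wolfe) loop: minimize
    f(x) = 0.5 * ||x - c||^2 over the probability simplex.

    Each iteration linearizes f at the current iterate, picks the simplex
    vertex minimizing the linearization, and line-searches along
    (vertex - x), mirroring the structure of the FGW solver.
    """
    n = len(c)
    x = np.full(n, 1.0 / n)                       # feasible starting point
    for _ in range(n_iter):
        g = x - c                                 # gradient of f at x
        v = np.zeros(n)
        v[np.argmin(g)] = 1.0                     # simplex vertex minimizing <v, g>
        d = v - x
        denom = d @ d
        if denom == 0.0:                          # already at the chosen vertex
            break
        t = np.clip(-(g @ d) / denom, 0.0, 1.0)   # exact line search for quadratic f
        x = x + t * d
    return x
```

For the FGW problem the feasible set is the coupling polytope rather than the simplex, so the vertex step becomes a linear OT solve, but the linearize / vertex / line-search structure is identical.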
To solve the Wasserstein problem of Eq. (7), in the paper we use the Sinkhorn-Knopp algorithm [2] to iteratively approximate the optimal solution $\pi_z^*$. Specifically, the Sinkhorn-Knopp algorithm adds an entropy regularizer and performs a scheme of alternating Sinkhorn projections: $\pi^{(0)}=\exp(-\boldsymbol{J}(\boldsymbol{Z}_1, \boldsymbol{Z}_2)/\lambda)$ and $\pi^{(t+1)} = \mathcal{S} (\mathcal{T}(\pi^{(t)}))$, where $t$ denotes the number of iterations, $\lambda$ weights the regularization, $\mathcal{S}(\pi)=\pi \oslash(\mathbf{1} \mathbf{1}^{\top} \pi) \odot(\mathbf{1} \boldsymbol{b}^{\top})$ and $\mathcal{T}(\pi)=\pi \oslash(\pi \mathbf{1} \mathbf{1}^{\top}) \odot(\boldsymbol{a} \mathbf{1}^{\top})$, $\odot$ denotes the Hadamard product, and $\oslash$ denotes element-wise division. As shown by [2], in the limit this scheme converges to a minimizer: $\pi^{(t)} \stackrel{t \rightarrow \infty}{\longrightarrow} \pi^*$.
[1] Revisiting frank-wolfe: Projection-free sparse convex optimization. ICML2013
[2] Sinkhorn distances: Lightspeed computation of optimal transport. NeurIPS2013
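A minimal NumPy sketch of the alternating-projection scheme above (the function name, toy marginals, and default parameters are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def sinkhorn_plan(J, a, b, lam=0.5, n_iter=500):
    """Entropy-regularized OT plan via alternating Sinkhorn projections.

    J    : (n, m) cost matrix, e.g. J(Z1, Z2) between node embeddings
    a, b : marginal distributions of the two graphs, shapes (n,) and (m,)
    lam  : weight lambda of the entropy regularizer
    """
    pi = np.exp(-J / lam)                                     # pi^(0)
    for _ in range(n_iter):
        pi = pi / pi.sum(axis=1, keepdims=True) * a[:, None]  # T: rescale rows to marginal a
        pi = pi / pi.sum(axis=0, keepdims=True) * b[None, :]  # S: rescale columns to marginal b
    return pi
```

In practice a fixed number of iterations is unrolled; since every step is differentiable in the cost matrix, the approximate plan remains differentiable with respect to the embeddings, which is what the plan matching loss relies on.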
> **Q4. The authors use gradient descent to optimize the Eq. (12) instead of Sinkhorn-Knopp. I wonder whether the accuracy of those optimized methods will affect the results.**
Eq. (12) involves the Gromov-Wasserstein term rather than just the traditional Wasserstein term, and the Sinkhorn-Knopp algorithm is designed for the Wasserstein term; it cannot be used to optimize the Gromov-Wasserstein term. We therefore use the conditional gradient (CG) solver, which is commonly used to optimize GW, for Eq. (12). Thus, the two algorithms cannot be directly compared. Also, as mentioned in the previous question, for Eq. (7), which contains only the Wasserstein term, we optimize with the Sinkhorn-Knopp algorithm.
---
Rebuttal Comment 1.1:
Title: Thanks for the review!
Comment: We extend our gratitude to the reviewer zpGL for acknowledging our work and providing us with valuable feedback.
**We will surely incorporate these comparisons, interpretations, and results in the final draft.** | Summary: In this paper, the authors study a new method "GALOPA" for self-supervised learning on graph.
For the two input views of graphs, they first compute the optimal transport plans w.r.t. the fused GW distance between input graphs G1 and G2; for the corresponding outputs Z1=f(G1), Z2=f(G2) of the GNN f(), they compute the optimal transport plan w.r.t. a tunable cost J(Z1, Z2). They propose two losses to enforce consistency between the two plan matrices and between the two cost matrices.
They also propose some interesting findings through empirical experiments.
Strengths: - originality:
- This paper proposes a new type of loss based on **transport plan** for self-supervised learning on graph.
- quality:
- The new framework they propose does work and can provide comparable performance to existing deep kernel methods.
- clarity:
- They provide multiple illustrations to introduce the main ideas of different parts.
- significance:
- This paper provides a new type of loss, which can be a good reference for self-supervised learning on graph.
Weaknesses: - quality:
- The analysis of computational complexity is not provided. From my perspective it can be an issue since the cost for doing OT is high, especially for GW distance.
- The theoretical justifications for some claims are lacking. I also leave some questions below.
- Some empirical results may be unconvincing due to the small size of datasets. E.g., the experiments in Section 5-7.
- clarity:
- Some concepts are not clearly illustrated. See the questions below.
- significance:
- The lack of theoretical results and insignificant improvement can make the paper less attractive to audience in this theoretical field.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: ### Minor
1. The concept of “maximum similarity” in Line 35 is not well explained.
2. Could you further explain why it "is challenging for discrete graph structures to know (or assume) beforehand that the two views are positive/negative samples" in Line 92-93. Solving it seems an important contribution of the paper, while I don't get it here.
3. It is better to directly introduce the concept of fused GW distance in Section 3, and therefore better convey the idea in Section 4.1.
4. Some strong claims (in Section 5-7) are made based on experiments on small datasets. Can the authors add some medium to large datasets, e.g. from Open Graph Benchmark (OGB), to the experiments?
### Major
1. The implementation of "GALOPA" is not clearly introduced. For me, the authors propose two new losses, while the backbone can be any models for obtaining graph node embedding. I don't understand why in the experiments we contrast some backbone models (MLP, GCN) to "GALOPA".
2. As mentioned, the analysis of computational complexity is not provided. Please also compare it to common baseline methods.
3. The loss $L_{match}$ needs more discussions. Specifically, I feel it is non-trivial to discuss the differentiability of $\pi_z^*(Z_1, Z_2)$. It is hard to obtain the explicit form of the gradient of it w.r.t. $Z_i$, and I'm not sure whether the auto differentiation in common DL package can ensure the gradient in use will exactly be the real gradient of $\pi_z^*(Z_1, Z_2)$, considering the complex procedure of obtaining the OT plan.
4. Can you further explain the claim in Line 236 that "the OT distance between the optimal node representations Z∗ in Equation (10) is **equal** to the distance between its corresponding graphs".
- Eqn. (10) is composed of two losses, so it is better to specify more clearly the "OT distance between the optimal node representations".
- Can you show the derivation why the "OT distance" is "equal to the distance between its corresponding graphs"?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for constructive feedback and for describing our work as good reference for self-supervised learning and demonstrating interesting findings in the experiments. We respond to the reviewer’s concerns **below** and in the **global response above**.
> **Q1. Large scale data in Section 5-7?**
We performed the experiments in Sections 5-7 on the ogb-arxiv dataset, which contains 169,343 nodes and 1,166,243 edges. We first partition the graph and compute the plan between subgraphs. We **perform self-supervised pre-training** with all training data and **supervised fine-tuning with 10% of them**, then evaluate on the test sets; this is repeated 10 times.
In **Section 5**, we compare the plan and distance. Table I records the node classification accuracies on the ogb-arxiv when pre-training using the Eqs. (10) and (11) as losses.
*Table I. Plan versus distance on large-scale ogb-arxiv.*
| Algo. | Test |
| :--------: | :--------: |
| NoPretrain | 0.512±0.31 |
| Plan | **0.544±0.18** |
| Dist | 0.527±0.26 |
‘NoPretrain’ refers to direct fine-tuning without pre-training. We can find that using the plan as losses outperforms the counterpart using the distance, which is consistent with the conclusions in the paper.
In **Section 6**, we compare node attributes and edges. If $\sigma=1$, the model takes into account only node attributes; when $\sigma=0$, it integrates only edge information. We set $\rho = 0$ (or $\rho \neq 0$) to remove (or add) the implicit structure term $\mathcal{L}_{(im)strc}$.
*Table II. Results on ogb-arxiv under different values of parameters.*
| Para | Test |
| :--------: | :--------: |
| $\rho=0, \sigma=1$ | 0.523±0.24 |
| $\rho \neq 0, \sigma=1$ | 0.542±0.37 |
| $\rho \neq 0, \sigma=0.5$ | 0.544±0.34 |
| $\rho \neq 0, \sigma=0$ | 0.540±0.25 |
From the table we can see similar conclusions to the paper.
In **Section 7**, we test the robustness of GALOPA. Table III records the performance when both feature masking (vertical axis) and edge perturbations (horizontal axis) are used.
*Table III. Results on ogb-arxiv with different perturbation rates.*
| Aug Rate | 0.1 | 0.2 | 0.4 | 0.6 | 0.8 |
| :--------: | :---: | :--------: | :--------: | :--------: | :--------: |
| 0.1 | 0.542±0.23 | 0.543±0.36 | 0.540±0.32 | 0.538±0.31 | 0.532±0.26 |
| 0.2 | 0.543±0.30 | **0.544±0.27** | 0.544±0.18 | 0.543±0.35 | 0.538±0.45 |
| 0.4 | 0.539±0.16 | 0.541±0.33 | **0.544±0.29** | 0.542±0.17 | 0.540±0.39 |
| 0.6 | 0.534±0.38 | 0.539±0.19 | 0.543±0.26 | 0.542±0.32 | 0.539±0.19 |
| 0.8 | 0.531±0.23 | 0.535±0.25 | 0.539±0.19 | 0.537±0.36 | 0.537±0.41 |
We can find that our model is robust on ogb-arxiv.
> **Q2. Backbone (MLP, GCN) on experiment?**
In the experiments, our aim is not to contrast GALOPA with different backbones. Instead, we aim to assess the performance gap between GALOPA (with a GNN backbone) and established supervised methods (e.g., the MLP algorithm [1] and the GCN algorithm [2]). This comparison is intended to reveal the performance difference between the unsupervised GALOPA and supervised approaches.
[1] Graph attention networks. STAT2017
[2] Semi-supervised classification with graph convolutional networks. arXiv2016
> **Q3. Analysis of computational complexity.**
The reviewer can refer to *GQ2* in the $\color{red}\text{global rebuttal}$.
> **Q4. Differentiability of** $\pi_Z^*(Z_1, Z_2)$.
It is difficult to derive a closed-form optimal solution of Eq. (7) or to differentiate $\pi_z^*(Z_1, Z_2)$ with respect to $Z_i$ directly. To solve this problem, we use the Sinkhorn-Knopp algorithm [1] to iteratively approximate the optimal solution $\pi_Z^*$, where the derivatives in each iteration step are solvable. Specifically, the Sinkhorn-Knopp algorithm adds an entropy regularizer and performs a scheme of alternating Sinkhorn projections: $\pi^{(0)}=\exp(-\boldsymbol{J}(\boldsymbol{Z}_1, \boldsymbol{Z}_2)/\lambda)$ and $\pi^{(t+1)} = \mathcal{S} (\mathcal{T}(\pi^{(t)}))$, where $t$ denotes the number of iterations, $\lambda$ weights the regularization, $\mathcal{S}(\pi)=\pi \oslash(\mathbf{1} \mathbf{1}^{\top} \pi) \odot(\mathbf{1} \boldsymbol{b}^{\top})$ and $\mathcal{T}(\pi)=\pi \oslash(\pi \mathbf{1} \mathbf{1}^{\top}) \odot(\boldsymbol{a} \mathbf{1}^{\top})$, $\odot$ denotes the Hadamard product, and $\oslash$ denotes element-wise division. As shown by [1], in the limit this scheme converges to a minimizer: $\pi^{(t)} \stackrel{t \rightarrow \infty}{\longrightarrow} \pi^*$. Hence, the differential $\partial\pi^{(t)}/\partial Z_i$ can be computed by the chain rule.
[1] Sinkhorn distances: Lightspeed computation of optimal transport. NeurIPS2013
> **Q5. Can you explain the claim that "the OT distance between the optimal node representations** $\boldsymbol{Z}^*$ **in Eq. (10) is equal to the distance between its corresponding graphs" (Line 236)?**
We apologize for the confusion caused by our lack of explanation here. Eq. (10) is obtained by adding the two losses (Eqs. 8 and 9), which constrain the optimal node representations $\boldsymbol{Z_i}^*$ to satisfy $\boldsymbol{J}(\boldsymbol{Z_1}^*, \boldsymbol{Z_2}^*)=\sigma \boldsymbol{K}(\boldsymbol{X_1}, \boldsymbol{X_2})+(1-\sigma) \boldsymbol{L}(\boldsymbol{A_1}, \boldsymbol{A_2}) \otimes \pi_\mathcal{G}^*$ and $\pi_\mathcal{Z}^*(\boldsymbol{Z_1}^*, \boldsymbol{Z_2}^*) = \pi_\mathcal{G}^*$, respectively. Hence, the OT distance between the optimal node representations, $\mathcal{W}(\boldsymbol{Z_1}^*, \boldsymbol{Z_2}^*)=\langle\boldsymbol{J}(\boldsymbol{Z_1}^*, \boldsymbol{Z_2}^*), \pi_\mathcal{Z}^*(\boldsymbol{Z_1}^*, \boldsymbol{Z_2}^*)\rangle$, is equal to the distance between the corresponding graphs, $\mathcal{W}_G(\mathcal{G}_1, \mathcal{G}_2)=\langle\sigma \boldsymbol{K}(\boldsymbol{X_1}, \boldsymbol{X_2})+(1-\sigma) \boldsymbol{L}(\boldsymbol{A_1}, \boldsymbol{A_2}) \otimes \pi^*_G,\ \pi^*_G\rangle$.
We place the remaining 3 questions Q6-Q8 below.
---
Rebuttal 2:
Title: We place the remaining 3 questions Q6-Q8 below.
Comment: > **Q6. The explanation of “maximum similarity” in Line 35?**
"Maximize similarity" in line 35 refers to maximizing the agreement score between two graphs. The agreement is usually measured by similarity score, such as inner product, between two representations. Given training graphs, graph contrastive learning aims to learn graph encoder such that representations of similar graphs (i.e., original and augmented graph) agree with each other.
> **Q7. Why it "is challenging for discrete graph structures to know (or assume) beforehand that the two views are positive/negative samples" in Line 92-93?**
There exist some graphs, such as molecular graphs, whose labels are very sensitive to perturbation/corruption. In other words, even under a very slight perturbation, the property (label) of the perturbed graph may change with respect to the original graph. Thus, it is difficult to determine whether the perturbed graph is a positive or negative sample. For example, in a molecular activity classification task, the activity (i.e., label) of a molecular graph may come from a certain functional group. A slight perturbation of this functional group can result in molecular inactivation, i.e., the label of the augmented molecular graph changes relative to the original molecular graph. In such cases, graph contrastive learning will learn similar representations for semantically dissimilar graphs.
> **Q8. It is better to directly introduce the concept of fused GW distance in Section 3.**
Thanks to the reviewer's suggestion, we will include a description of the FGW distance in Section 3.
---
Rebuttal 3:
Comment: Thanks for the clarification in the author response. I still have two remaining concerns as follows.
For the computational complexity, the author proposed some resolutions to reduce the cost.
- Is it always the case that we can know the so-called matching prior of the two graphs $G_1, G_2$ involved in the computation? I can understand we may know that for augmented data samples, while what if $G_1, G_2$ are just two samples under different labels and are quite different?
- The fast algorithms must have some cost, while I'm not sure how the cost of the fast algorithms would influence the proposed model/loss.
- It might be more convincing if the new experiments on ogbn-arxiv can be timed for both the proposed method and related baselines.
For the usage of $L_{match}$,
- Do you in practice use the gradient from $\partial \pi^{(T)} / \partial Z_i$ to update the parameters in GNN $f()$?
- If yes, even we know $\pi^{(T)} \to \pi^*$, how would you show the gradient $\partial \pi^{(T)} / \partial Z_i \to \partial \pi^* / \partial Z_i$?
---
Rebuttal Comment 3.1:
Title: Response (1/3) to Reviewer YJ1t
Comment: We're glad to hear that we have addressed some of the reviewer's concerns, and we'll respond to the rest of the reviewer's questions below.
> **Q9-1. Is it always the case that we can know the matching prior of the two graphs $G_1$, $G_2$ involved in the computation? What if $G_1$, $G_2$ are just two samples under different labels and are quite different?**
We greatly appreciate the reviewer's consideration of the matching prior aspect in our proposed methodology. In our previous response, we suggested several ways to reduce the computational expense. If $G_1$ and $G_2$ are distinct samples with diverse labels, the other solutions, which do not rely on prior knowledge of the relationship between $G_1$ and $G_2$, can still work effectively.
Specifically, for datasets comprising a single graph (e.g., a citation network or social network), we need to use augmentation strategies to obtain multiple samples. The matching prior between these graphs is known in this scenario.
For datasets encompassing multiple graphs, our model adapts well to scenarios involving augmented or entirely distinct graphs. Let's delve into the scenario of employing two entirely distinct graphs:
1) In cases where the datasets contain small to modest-sized graphs (e.g., with an average of fewer than 1000 nodes per graph), the computational complexity associated with matching is negligible. Here, the original model can be effectively utilized.
2) Furthermore, for datasets that comprise large graphs, alternative strategies are available to mitigate computational complexity. Techniques such as the utilization of only node attributes (point 2 in **GQ2**), graph partitioning, or the linear optimal transport algorithm (point 3 in **GQ2**), offer avenues for reducing time complexity. It's crucial to highlight that these approaches don't necessitate prior matching information of the two graphs, rendering them versatile options.
---
Rebuttal Comment 3.2:
Title: Response (2/3) to Reviewer YJ1t
Comment: > **Q9-2. I'm not sure how the cost of the fast algorithms would influence the proposed model/loss?**
As rightly pointed out by the reviewer, fast algorithms do indeed come with their associated trade-offs. Efficiency gains may entail some compromise in precision. In light of this observation, our subsequent comparison experiments (Tables (1)-(5)) illustrate that the performance trade-off resulting from these fast algorithms is minor when compared to the significant time savings they offer. This holds particularly true for medium to large datasets. Thus, the cost incurred by the use of fast algorithms is judiciously balanced against the benefits they provide.
To address the reviewer's concerns and provide a more comprehensive evaluation, we've devised a variant algorithm known as GALOPA(linear). This variant focuses on computing the transport plan solely based on the node attributes and employs the linear Sinkhorn algorithm [1] to optimize in both the graph space and representation space. The original version of our model retains the name GALOPA(cube).
We have meticulously conducted experiments, maintaining the experimental settings outlined in the paper, across node and graph classification datasets. The performance outcomes are diligently recorded, and the results are presented in the subsequent table for your reference.
From the results in Tables (1)-(2) we can observe that the variant GALOPA(linear) exhibits comparable performance with GALOPA(cube), especially on medium/large graphs such as Ama-Photo (with 7,650 nodes), PubMed (19,717 nodes), Coauthor-CS (18,333 nodes), and Amz-Comp. (13,752 nodes). In some cases, GALOPA(linear) performs better than GALOPA(cube), because the neural networks may get stuck at local optima, resulting in a slight difference in performance; this further underscores the good performance of GALOPA(linear).
*Table (1): Node classification accuracy (%) for GALOPA(cube) and GALOPA(linear).*
| Models | Cora | CiteSeer | PubMed | WiKiCS | Amp-Comp. | Amp-Photo | Coauthor-CS |
| :--------: | :---: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: |
| **GALOPA(cube)** | 84.21±0.30 | 74.34±0.18 | 84.57±0.34 | 81.23±0.19 | 88.65±0.11 | 92.77±0.40 | 93.04±0.25 |
| **GALOPA(linear)** | 82.73±0.29 | 72.12±0.35 | 84.39±0.19 | 81.15±0.39 | 88.49±0.17 | 92.82±0.27 | 92.76±0.22 |
*Table (2): Graph classification accuracy (%) for GALOPA(cube) and GALOPA(linear).*
| Models | PROTEINS | DD | MUTAG | NCI1 | COLLAB | IMDB-B |
| :--------: | :---: | :--------: | :--------: | :--------: | :--------: | :--------: |
| **GALOPA(cube)** | 76.93±0.18 | 83.87±0.42 | 91.11±1.27 | 77.86±0.36 | 73.20±0.37 | 70.72±0.48 |
| **GALOPA(linear)** | 76.77±0.32 | 82.39±0.45 | 90.88±1.29 | 76.59±0.24 | 73.33±0.41 | 70.71±0.39 |
Additionally, we count the average elapsed time per epoch for training these two models on all datasets.
Note that all the experiments are conducted and runtimes are recorded on the *same hardware environment* as stated in the paper. The results are shown in the table below. These tables underscore the substantial reduction in time consumption associated with GALOPA(linear) compared to GALOPA(cube), especially evident in medium to large datasets such as PubMed, Amp-Comp., DD, etc.
*Table (3): The average elapsed time per epoch of the models on Node classification datasets.*
| Models | Cora | CiteSeer | PubMed | WiKiCS | Amp-Comp. | Amp-Photo | Coauthor-CS |
| :--------: | ----: | ---------: | ---------: | ---------: | ---------: | ---------: | ---------: |
| **GALOPA(cube)** | 1.53s | 2.18s | 74.58s | 25.66s | 34.73s | 11.60s | 71.17s |
| **GALOPA(linear)** | 0.29s | 0.80s | 10.20s | 3.14s | 5.24s | 1.66s | 32.03s |
*Table (4): The average elapsed time per epoch of the models on Graph classification datasets.*
| Models | PROTEINS | DD | MUTAG | NCI1 | COLLAB | IMDB-B |
| :--------: | ----: | ---------: | ---------: | ---------: | ---------: | ---------: |
| **GALOPA(cube)** | 7.05s | 300.50s | 1.21s | 18.15s | 74.64s | 3.59s |
| **GALOPA(linear)** | 3.36s | 20.07s | 0.47s | 11.75s | 21.46s | 2.76s |
[1] Linear time sinkhorn divergences using positive features. NeurIPS2020.
---
Rebuttal Comment 3.3:
Title: Response (3/3) to Reviewer YJ1t
Comment: > **Q9-3. It might be more convincing if the new experiments on ogbn-arxiv can be timed for both the proposed method and related baselines.**
Thanks to the suggestion of the reviewer, we record in the following table the average elapsed time per epoch taken to pre-train on the ogbn-arxiv with the algorithm GALOPA(cube), the variant algorithm GALOPA(linear), and the baseline BGRL (with linear complexity), and the fine-tuning accuracies obtained on the supervised algorithms.
All the experiments are conducted and runtimes are recorded on the *identical hardware environment* as described in the paper.
Note that for GALOPA(cube) and GALOPA(linear) we first partition the graph and compute the plan between subgraphs, where the average size of each subgraph is ~5000 nodes.
The results of our experiments on the ogbn-arxiv dataset, presented in the following table, showcase the substantial reduction in running time achieved through the implementation of the complexity reduction approach. Importantly, this efficiency enhancement is coupled with comparable performance to the original model.
*Table (5): The average elapsed time per epoch for training the models and the fine-tuning accuracies on ogbn-arxiv.*
| Models | Time | Test Accuracy |
| :--------: | :---: | :--------: |
| **GALOPA(cube)** | 60.4s | 0.544±0.18 |
| **GALOPA(linear)** | 2.79s | 0.541±0.29 |
| **BGRL** | 1.02s | 0.535±0.19 |
> **Q10. Do you in practice use the gradient from** $\nabla_{Z_i}\pi^{(T)}$ **to update the parameters in GNN** $f()$**? If yes, even we know** $\pi^{(T)} \rightarrow \pi^*$**, how would you show the gradient** $\nabla_{Z_i} \pi^{(T)} \stackrel{T \rightarrow \infty}{\longrightarrow} \nabla_{Z_i} \pi^*$**?**
We extend our gratitude to the reviewers for providing us with the opportunity to address this query. We use the Sinkhorn-Knopp algorithm for optimization and compute the gradient $\nabla_{Z_i}\pi^{(T)}$ using backpropagation to update the parameters of the GNNs during the optimization process.
It's noteworthy that extensive research has been conducted concerning the convergence properties of this differentiation mechanism. Recent advancements in this domain are highlighted in the paper [2], wherein theoretical proofs have been established. Theorem 3.3 of [2] implies that $\pi^{(T)}$ is continuously differentiable for all $T$ and the sequence of derivatives $\nabla_{Z_i}\pi^{(T)}$ **converges** at a linear rate. In particular, for all $Z_i$, $\nabla_{Z_i}\pi^{(T)} \stackrel{T \rightarrow \infty}{\longrightarrow} \nabla_{Z_i}\pi^*$, where $Z_i$ is the variable of the cost matrix $\mathbf{C}(Z_i)$ and corresponds to $\theta$ in the original text of [2]; $\mathbf{P}$ in [2] refers to the transport plan $\pi$.
[1] Linear time sinkhorn divergences using positive features. NeurIPS2020.
[2] The derivatives of sinkhorn–knopp converge. SIAM Journal on Optimization2023.
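As a quick sanity check of this convergence claim, one can compare finite-difference derivatives of the Sinkhorn plan at two different iteration counts. The snippet below is a hypothetical illustration with our own toy cost matrix $\mathbf{C}(\theta)=\theta\,\mathbf{C}_0$ and marginals, not the paper's setup:

```python
import numpy as np

def sinkhorn_plan(C, a, b, lam, T):
    # entropy-regularized plan after exactly T Sinkhorn-Knopp iterations
    pi = np.exp(-C / lam)
    for _ in range(T):
        pi = pi / pi.sum(axis=1, keepdims=True) * a[:, None]  # match rows to a
        pi = pi / pi.sum(axis=0, keepdims=True) * b[None, :]  # match columns to b
    return pi

def fd_grad(theta, a, b, lam, T, eps=1e-6):
    # finite-difference d(pi^(T)) / d(theta) for the toy cost C(theta) = theta * C0
    C0 = np.array([[0.0, 1.0], [2.0, 0.5]])
    return (sinkhorn_plan((theta + eps) * C0, a, b, lam, T)
            - sinkhorn_plan((theta - eps) * C0, a, b, lam, T)) / (2 * eps)

a = np.array([0.3, 0.7])
b = np.array([0.5, 0.5])
g_50 = fd_grad(1.0, a, b, lam=0.5, T=50)
g_100 = fd_grad(1.0, a, b, lam=0.5, T=100)
# doubling T barely changes the derivative, consistent with the
# linear-rate convergence of the gradients established in [2]
```

On this small problem the two finite-difference gradients agree to many digits, which is the behavior one expects if $\nabla_{Z_i}\pi^{(T)}$ has already converged well before $T=50$.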
**We really appreciate your valuable comments which help improve the quality of our manuscript, and we will add these comparisons and results to the final version.**
---
Rebuttal 4:
Comment: Many thanks for the new response. I would like to raise my ratings from 5 to 6 and encourage the authors to incorporate the discussion into the next revision.
---
Rebuttal Comment 4.1:
Title: Thank you!
Comment: Thank you very much for the update and your helpful suggestion, we'll add them to our next revision. | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their insightful feedback.
Here we respond to common/main concerns raised by reviewers.
> **GQ1. The algorithms to compute the FGW (Eq. (6)) between graphs and the OT distance (Eq. (7)) between node embeddings. (zpGL, bARL)**
We describe the optimization algorithms for both of these terms below, which were previously mentioned in Appendix B of the original submission.
To solve the FGW problem of Eq. (6), we optimize the transport plan with the conditional gradient (CG) solver. The conditional gradient algorithm [1] consists of solving a linearization $\langle \mathbf{X}, \nabla_\pi \rangle$ at each iteration $r$. It can be solved by gradient descent with direction $\mathbf{X}^{(r)} - \pi^{(r)}$, followed by a line search for the optimal step size. The details of the algorithm are summarized in Algorithm 1 in the Appendix.
To solve the Wasserstein problem of Eq. (7), in the paper we use the Sinkhorn-Knopp algorithm [2] to iteratively approximate the optimal solution $\pi_z^*$. Specifically, the Sinkhorn-Knopp algorithm adds an entropy regularizer and performs a scheme of alternating Sinkhorn projections: $\pi^{(0)}=\exp(-\boldsymbol{J}(\boldsymbol{Z}_1, \boldsymbol{Z}_2)/\lambda)$ and $\pi^{(t+1)} = \mathcal{S} (\mathcal{T}(\pi^{(t)}))$, where $t$ denotes the number of iterations, $\lambda$ weights the regularization, $\mathcal{S}(\pi)=\pi \oslash(\mathbf{1} \mathbf{1}^{\top} \pi) \odot(\mathbf{1} \boldsymbol{b}^{\top})$ and $\mathcal{T}(\pi)=\pi \oslash(\pi \mathbf{1} \mathbf{1}^{\top}) \odot(\boldsymbol{a} \mathbf{1}^{\top})$, $\odot$ denotes the Hadamard product and $\oslash$ denotes element-wise division. As shown by [2], in the limit this scheme converges to a minimizer: $\pi^{(t)} \stackrel{t \rightarrow \infty}{\longrightarrow} \pi^*$.
[1] Revisiting frank-wolfe: Projection-free sparse convex optimization. ICML2013
[2] Sinkhorn distances: Lightspeed computation of optimal transport. NeurIPS2013
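For readers who want to try the projection scheme above, here is a minimal NumPy sketch (illustrative only; the variable names, regularization value, and iteration count are our own, not from the paper):

```python
import numpy as np

def sinkhorn_knopp(J, a, b, lam=0.1, n_iters=500):
    """Entropy-regularized OT via alternating Sinkhorn projections:
    pi^(0) = exp(-J / lam), then repeat pi <- S(T(pi)), where
    T rescales rows to the marginal a and S rescales columns to b."""
    pi = np.exp(-J / lam)                                     # pi^(0)
    for _ in range(n_iters):
        pi = pi / pi.sum(axis=1, keepdims=True) * a[:, None]  # T(pi): row projection
        pi = pi / pi.sum(axis=0, keepdims=True) * b[None, :]  # S(pi): column projection
    return pi
```

At convergence the plan's marginals match $a$ and $b$, and $\langle \boldsymbol{J}, \pi \rangle$ approximates the (regularized) Wasserstein distance of Eq. (7).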
> **GQ2. The analysis and reducing of computational complexity. (YJ1t, bARL)**
**Analysis of computational complexity:** We provided a time complexity analysis in the appendix of our submission. We describe it in more detail below, plus some additional comparisons with baseline algorithms. The time complexity of GALOPA comes mainly from the optimization of Eqs. (6) and (7). For Eq. (6), which contains the Fused Gromov-Wasserstein term, we use the conditional gradient (CG) solver for optimization, which requires the computation of a gradient with near-cubic $\mathcal{O}(n^3)$ time complexity at each iteration, where $n$ denotes the size of the graph, i.e., the number of nodes. For Eq. (7), with the Wasserstein term, we can use the Sinkhorn-Knopp algorithm, which is time-efficient with near-square $\mathcal{O}(n^2)$ complexity.
**Reducing complexity**: To reduce the time complexity, we utilize the properties of the proposed model and/or scaling optimal transport techniques, which can reduce the time complexity from $\color{red}{\mathcal{O}(n^3)}$ to $\color{red}{\mathcal{O}(n^2)}$ or even to $\color{red}{\mathcal{O}(n)}$. We provide 4 ways to do this below:
1) Unlike general OT settings, where the two graphs are typically quite different and the matching relationship between them is completely unknown, the **difference between the original and augmented graphs** in GALOPA **is quite small** and the **matching relation for the subgraph component** other than the difference part (i.e., the complementary set of the difference part) **is known**. This means that we can utilize the *matching prior* to reduce the computational cost. Hence, we can split off the difference part with its neighborhood from the two graphs and compute the optimal transport plan only for that part. Since the percentage of that part is very small, this can greatly reduce the time complexity. For example, with the perturbation rate set to 1%, the time complexity for two million-sized graphs ($(10^{6})^3$ or $(10^{6})^2$) is directly reduced by 6 (or 4) orders of magnitude, to $(10^6)^2$ or $(10^{6})^{1.3}$.
2) According to the observation in Section 6, we can avoid the cubic complexity $\mathcal{O}(n^3)$ of optimizing GW by **using only the node attributes** for computing the optimal plan in graph space, while retaining similar performance with near-square time complexity $\mathcal{O}(n^2)$.
3) Alternatively, we can reduce the computational cost by utilizing sparsity [1] or graph partitioning [2, 3]. In particular, we can employ the most recent work on linear optimal transport [4, 5], which computes the FGW term and/or Wasserstein term in **linear time** $\mathcal{O}(n)$.
4) We have the option to combine the aforementioned methods. For instance, by merging insights from point 1, a significant portion of subgraph pairs acquired via graph partitioning methods in point 3 turns out to be **identical**. This realization can further pare down the complexity of graph partitioning methods.
[1] Efficient approximation of gromov-wasserstein distance using importance sparsification. JCGS2023
[2] Scalable gromov-wasserstein learning for graph partitioning and matching. NeurIPS2019
[3] Quantized gromov-wasserstein. PKDD2021
[4] Computing wasserstein-p distance between images with linear cost. CVPR2022
[5] On a linear fused gromov-wasserstein distance for graph structured data. PR2023
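The order-of-magnitude claim in point 1 can be checked with a few lines of arithmetic (the numbers are illustrative, following the 1%-perturbation example above):

```python
import math

n = 10**6   # full graph size
m = 10**4   # perturbed part plus neighborhood at a ~1% perturbation rate

# CG solver is near-cubic; Sinkhorn is near-square
cubic_drop = math.log10(n**3) - math.log10(m**3)    # orders of magnitude saved
square_drop = math.log10(n**2) - math.log10(m**2)

print(cubic_drop, square_drop)
# re-expressing the reduced costs in terms of n:
#   m**3 == n**2        (10^12)
#   m**2 == n**(4/3)    (10^8, i.e. roughly n^1.3 as stated above)
```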
**Complexity of GCL:** We further analyze the time complexity of contrastive learning algorithms. The contrastive loss (e.g., the InfoNCE loss) computes all-pairs distances between nodes as negative pairs, which induces quadratic time complexity $\mathcal{O}(n^2)$ with respect to the graph size. Thus, general graph contrastive learning algorithms (e.g., GRACE, MVGRL, GraphCL, etc.) have quadratic time complexity $\mathcal{O}(n^2)$ with respect to graph size. BGRL introduced a loss that does not require negative pairs, with linear computational cost $\mathcal{O}(n)$. |
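To make the quadratic cost concrete, a toy all-pairs InfoNCE over node embeddings can be sketched as follows (a generic illustration, not any specific paper's implementation); note the explicit $n \times n$ similarity matrix:

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """All-pairs InfoNCE: row i treats (z1_i, z2_i) as the positive pair and
    every other column as a negative, so an n x n similarity matrix is
    materialized -- the source of the O(n^2) cost in graph size."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                                   # n x n matrix
    log_prob = np.diag(sim) - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()
```

Perfectly aligned views yield a lower loss than anti-aligned ones, and both the time and memory footprint grow with $n^2$.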
Private Federated Frequency Estimation: Adapting to the Hardness of the Instance | Accept (poster) | Summary: In the nascent area of 'federated analytics', the canonical problem is frequency estimation: each client holds a label (or a collection of labels), and the aim is to recover the frequency distribution of these labels, or at least information on the most frequent labels (heavy hitters). Prior work has demonstrated that this problem can be tackled by making use of 'sketch' data structures designed over the last few years. The question is now to understand how to optimize the accuracy-communication tradeoff, while meeting various security and privacy guarantees. This paper builds on prior work in this direction, by arguing that better bounds can be achieved due to the skewed nature of the input data: essentially, the worst case for sketches is when the data is uniform, and better accuracy is seen when most mass is in the head of the label frequency distribution. Results for this case follow in part due to prior analysis of such data structures, adapted to the parameters of the federated setting. The second results apply to the case when data is collected in multiple rounds, by comparing the effect of using the same randomly chosen sketch parameters versus varying some. The results show that this achieves improved bounds in the case that the frequency distribution changes between rounds. Experiments based on simulated data allocations quantify this further.
Strengths: This is a foundational problem for the area of federated computing, and the paper shows some improved results which could be of use to practitioners.
The algorithms proposed are clear and suitable to be implemented. It is straightforward to achieve differential privacy by noise addition to the sketch (under secure aggregation), with bounded impact on accuracy.
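A minimal sketch of that noise-addition step, assuming the Gaussian mechanism applied to the securely aggregated sketch (the noise scale and its calibration here are placeholders, not the paper's):

```python
import numpy as np

def privatize_sketch(agg_sketch, sigma, rng=None):
    """Add i.i.d. Gaussian noise to each cell of the securely aggregated
    sketch table. Under secure aggregation the server only ever sees this
    noisy sum, so releasing it gives a central (eps, delta)-DP guarantee,
    provided sigma is calibrated to the per-client L2 sensitivity."""
    rng = np.random.default_rng() if rng is None else rng
    return agg_sketch + rng.normal(0.0, sigma, size=agg_sketch.shape)
```

Because the noise is added once to the aggregate rather than per client, its impact on estimation accuracy stays bounded, as the review notes.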
Weaknesses: The novel contribution is not extremely high. The first set of results are shown by plugging parameters into theorems from prior work, and some manipulation of probabilities. The second set of results are more involved, but can still be viewed as adaptations of prior proofs. From a technical perspective, there is not much excitement. The potential practical implications could elevate this work, but this would require more effort to demonstrate that the real world scenarios where this is needed map on to the assumptions in this work.
The results separate best from the single sketch approach when the data is 'heterogeneous'. The paper could do more to define the exact model of heterogeneity assumed, and to test these on real data. The main opportunity for hybrid sketch to shine (P8) is when the frequency vectors are heterogeneous -- that is, when the frequency distribution is different during each round of data collection. (Note that most commonly, in the federated setting, heterogeneous is used to refer to the case when the allocation of data to each client can vary a lot. However, the definition of the frequency estimation problem is oblivious to the allocation of labels to clients: it just asks for the global frequency distribution.) It would be helpful to know what kind of scenarios expect to have different frequency distributions per round, and what are the implications of this for the frequency estimation problem -- when the data is constantly shifting, what should the ground truth frequency distribution be?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Are there any good data sets or references that can help to argue that the heterogeneity vectors F are substantially smaller (in Euclidean norm) than the frequency vectors?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No limitations in the sense of negative societal impact
Could say more about technical limitations of the work, or future work -- the conclusion just summarizes the results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and suggestions. In the following, we will address your questions.
---
**Q1**. “The novel contribution is not extremely high. The first set of results are shown by plugging parameters into theorems from prior work, and some manipulation of probabilities. The second set of results are more involved, but can still be viewed as adaptations of prior proofs. From a technical perspective, there is not much excitement. The potential practical implications could elevate this work, but this would require more effort to demonstrate that the real world scenarios where this is needed map on to the assumptions in this work.”
**A1**. We use some standard techniques to derive our bounds. However, we’d like to emphasize that both the considered multi-round FFE problem settings and results are new. It also requires non-trivial efforts to establish new effective algorithms such as Hybrid Sketch, to allow the sketching method to adapt to the hardness of the instance, and to make the algorithm differentially private.
---
**Q2**. “The results separate best from the single sketch approach when the data is 'heterogenous'. The paper could do more to define the exact model of heterogeneity assumed, and to test these on real data. The main opportunity for hybrid sketch to shine (P8) is when the frequency vectors are heterogenous -- that is, when the frequency distribution is different during each round of data collection. (Note that most commonly, in the federated setting, heterogenous is used to refer to the case when the allocating of data to each client can vary a lot. However, the definition of the frequency estimation problem is oblivious to the allocation of labels to clients: it just asks for the global frequency distribution). It would be helpful to know what kind of scenarios expect to have different frequency distributions per round, and what are the implications of this for the frequency estimation problem -- when the data is constantly shifting, what should the ground truth frequency distribution be?”
**A2**. We'd like to make two clarifications regarding our results.
Firstly, Hybrid Sketch separates best from Shared Sketch when the multi-round dataset is *homogeneous* rather than heterogeneous (see the discussions in lines 247-264). Therefore, the main opportunity for Hybrid Sketch to shine is when local frequency vectors are closer to homogeneous rather than heterogeneous.
Secondly, our definition of (single-round) frequency estimation problems only asks for a frequency vector. However, our definition of multi-round frequency estimation problems asks for local frequency vectors in each round instead of only a global frequency vector (this can be seen from Theorem 3.2). So the latter problem is *not* oblivious to data allocation.
We hope we have clarified the questions. Please let us know if there are any further concerns.
The online setting where data is constantly shifting is very interesting. However, our main focus in this work is an "offline" setting where the number of rounds $M$ is fixed (and known). We will comment on the online setting as future work.
---
**Q3**. “Are there any good data sets or references that can help to argue that the heterogeneity vectors F are substantially smaller (in Euclidean norm) than the frequency vectors?”
**A3**. In an ideal case where the local frequency vectors are perfectly homogeneous, we can show that (see lines 247-264)
$$ F_i = \frac{1}{\sqrt{M}} f_i,$$
where $M$ is the number of rounds. So the heterogeneity vector is only $1/\sqrt{M}$ of the global frequency vector, pointwise, hence also in Euclidean norm. In practice, the multi-round datasets are often *approximately* homogeneous, so we expect the heterogeneity vector to be substantially smaller than the global frequency vector.
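A toy numeric check of this identity, under the simplifying assumption that the heterogeneity vector is the root-sum-of-squares of the per-round local frequencies (a hypothetical reading chosen to match the formula above, not necessarily the paper's exact definition):

```python
import numpy as np

M = 4                                    # number of rounds
f = np.array([0.5, 0.3, 0.2])            # global frequency vector
local = np.tile(f / M, (M, 1))           # perfectly homogeneous rounds

# hypothetical heterogeneity vector: root-sum-of-squares over rounds
F = np.sqrt((local ** 2).sum(axis=0))

# matches F_i = f_i / sqrt(M) pointwise, hence also in Euclidean norm
```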
---
Rebuttal Comment 1.1:
Comment: Thank you for the careful response, and the clarification around the interpretation of heterogeneity in this work. The empirical results show that there is a clearer separation when there is a lot of skew in the frequency distribution (Fig 2b and 2c) -- it would be good to comment on what kind of real world inputs display such skew. I've updated my score as the responses have removed some of my uncertainties.
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: Thank you for updating your rating and confirming that some uncertainties have been clarified!
In many real-world datasets, the frequency vector displays skewness. For example, the Zipf’s law [15] states that in practice, a list of measured values often decays at a polynomial rate when sorted. We will highlight the skewness of real-world datasets more in the revision. | Summary: The paper considers the problem of federated frequency estimation under the Secure Summation constraint. Motivated by the tail-bound analysis of CountSketch, the authors propose a two-round communication approach that can significantly reduce the sketch size and improve the overall communication size. Later, the authors extend the algorithm to a multi-round HybridSketch algorithm and improve the protocol's scalability. This paper also carefully analyzed their proposed method's trade-offs between accuracy and privacy. The proposed algorithm shows performance improvements in real-world datasets (Gowalla and Colossal Clean Crawled Corpus).
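For orientation, a minimal non-federated CountSketch looks roughly like the following (our own simplified sketch with precomputed toy hash tables, not the paper's Algorithm 2): each of $d$ rows hashes every item to one of $w$ signed counters, and point queries take the median across rows. The lighter the frequency tail, the smaller the collision noise in each bucket, which is what allows a smaller width $w$.

```python
import numpy as np

class CountSketch:
    def __init__(self, depth, width, universe, seed=0):
        rng = np.random.default_rng(seed)
        self.depth, self.width = depth, width
        self.table = np.zeros((depth, width))
        # precomputed random hashes/signs over a small universe; real
        # implementations use pairwise-independent hash families instead
        self.bucket = rng.integers(0, width, size=(depth, universe))
        self.sign = rng.choice([-1.0, 1.0], size=(depth, universe))

    def add(self, item, count=1):
        for r in range(self.depth):
            self.table[r, self.bucket[r, item]] += self.sign[r, item] * count

    def estimate(self, item):
        # median over rows of the signed bucket value
        return float(np.median([self.sign[r, item] * self.table[r, self.bucket[r, item]]
                                for r in range(self.depth)]))
```

A heavy item inserted 1000 times alongside 50 singleton items is recovered to within the total light mass, since each bucket's error is a signed sum of colliding light counts.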
Strengths: Federated frequency estimation is a very important task in federated analytics and has many applications. The main idea is intuitive. Many real-world data distributions exhibit skewness; while Chen et al. show that the lower bound for this task is $n\log d$ bits of communication per client, this work takes the distribution into account to reduce the communication costs.
Weaknesses: The experiments showcase good estimation accuracy improvement. One may wonder how the communication latency changes between the baselines and hybrid sketch.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: What is the difference in communication latency between the proposed algorithm vs. baselines?
Use different ticks or line styles to better distinguish between the lines in figures.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: No limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your positive evaluation! Please find our answers to your specific questions below.
---
**Q1**. “What is the difference in communication latency between the proposed algorithm vs. baselines?”
**A1**. In our experiment, the client/server communication is simulated so the latency is ignorable. In practice, we expect that the communication latency is proportional to the sketch size (which determines the message length). So the difference between our proposed method vs. baselines in the sketch size vs. error plots (see Figure 1(b)(c)(e)(f), Figure 2, and Figure 3) reflects their communication latency difference in practical applications. For example, in Figure 3, the dashed gray lines show that for a fixed error, the hybrid sketch size is much smaller than the fresh and shared design.
---
**Q2**. “Use different ticks or line styles to better distinguish between the lines in figures.”
**A2**. We will revise the plots as you suggested. | Summary: This paper tackles the problem of Federated Frequency Estimation (FFE) by leveraging the Federated CountSketch algorithm (Algorithm 2 in this paper). Compared to previous FFE methods, the authors exploit the case where the tail of the underlying frequencies is light or small (i.e., the smaller frequencies have considerably smaller values). Such a lighter tail provides two benefits: (1) the FFE error decreases, and (2) a smaller width can be afforded in the Federated CountSketch algorithm, which leads to a direct reduction in client communication costs. Under a federated setting, both advantages are meaningful. In the first half of the paper, the authors provide ways to leverage this lighter tail in the federated setting, while in the second half, the authors provide FFE algorithms that work for a large number of clients by running CountSketch over multiple groups of clients. Finally, the paper also provides a Differential Privacy (DP) guarantee and estimation error while running the FFE algorithm over multiple rounds.
Overall, the authors provide a clear and unique analysis that furthers the FFE literature by reducing the estimation error and sketch width when leveraging the (almost optimal) CountSketch algorithm. The empirical evaluations provide a complete picture of the impact of the suggested method. Minor comments regarding the two-phase method, empirical evaluations, and DP section have been provided below.
Strengths: The paper tackles several challenges in the FFE literature. Primarily the authors demonstrate FFE algorithms for the cases when (1) the number of clients is low to moderate and (2) for a high number of clients. They also tackle the case when Differential Privacy (DP) is introduced in FFE deployments.
1. The core idea that the authors leverage is the reduction in the estimation error while using the CountSketch algorithm when the underlying frequencies have a lighter tail. The authors argue that when the data is distributed in such a (lighter-tail) fashion, the previous estimation error bounds are unable to leverage these tighter bounds to provide real-world benefits. Using Corollaries 2.2-2.4 the authors demonstrate a reduction in the estimation error and the sketch width and thus the communication cost of the clients.
2. Furthermore, the paper suggests that when the clients are high in number, simply computing CountSketch over a single round might not be feasible. They demonstrate that the naive application of CountSketch in the multi-round setup is not enough. Thus, they suggest a new algorithm, "Hybrid Sketch," extending CountSketch to the multi-round setup. The Hybrid Sketch algorithm, by maintaining the same hash buckets, can combine the analysis of frequency estimation across rounds rather than analyzing each separately.
3. Finally, a DP alternative of Hybrid Sketch is provided that follows similarly to previous approaches.
Weaknesses: 1. For the two-phase method in section 2, the authors suggest that frequencies follow the Zipf law; thus, they estimate the polynomial coefficients by a kind of pilot study on a small number of clients. It is however unclear how the instance optimal method can leverage Eqn. (2) with the unknown frequencies to estimate the sketch width. Is the two-phase method employed for the instance optimal method as well? Please consider adding relevant clarification for all three methods.
2. For the DP analysis, consider mentioning the exact mechanism used to add noise. At first glance, it might not be clear whether the method employs local DP or central DP.
3. In certain cases, the idea that the frequencies have a lighter tail might not be verifiable (before running the sketching algorithm). Such an instance may occur if the communication or DP budget is constrained or if there are not enough clients to handle both the pilot and the actual sketching algorithm under DP constraints. Thus, adding this limitation clearly toward the end of the article will add clarity.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: The paper is well-written and conveys interesting results for performing Federated Frequency estimation over various scenarios. I do not have major concerns regarding the paper. A few minor suggestions have been pointed out in the weaknesses section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The primary limitation of the proposed method has been provided in point 3 of the weaknesses section. The authors can consider including a limitations/discussion section that highlights such challenges.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for supporting our paper! We will address your comments as follows.
---
**Q1**. “For the two-phase method in section 2, the authors suggest that frequencies follow the Zipf law; thus, they estimate the polynomial coefficients by a kind of pilot study on a small number of clients. It is however unclear how the instance optimal method can leverage Eqn. (2) with the unknown frequencies to estimate the sketch width. Is the two-phase method employed for the instance optimal method as well? Please consider adding relevant clarification for all three methods.”
**A1**. In the instance optimal method, the sketch size is computed based on eq. (2) by an oracle. The instance optimal method is for demonstrating the correctness of our theoretical understanding and is not a practical method. We will provide detailed clarifications for all three methods in the revision.
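To make the two-phase idea concrete, here is a hedged sketch of a Zipf-coefficient fit from pilot frequencies (the function name and the log-log least-squares estimator are illustrative assumptions; the paper's exact pilot procedure may differ):

```python
import math

def fit_zipf(pilot_freqs):
    """Fit F_i ~ c * i^(-a) to pilot frequencies via least squares on
    log F_i = log c - a * log i (illustrative estimator)."""
    f = sorted((x for x in pilot_freqs if x > 0), reverse=True)
    xs = [math.log(i + 1) for i in range(len(f))]  # log of rank 1..n
    ys = [math.log(v) for v in f]
    n = len(f)
    mx, my = sum(xs) / n, sum(ys) / n
    a = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    c = math.exp(my + a * mx)
    return c, a

# Synthetic pilot: exact Zipf frequencies with scale 1000 and exponent 1.2.
pilot = [1000.0 * (i + 1) ** -1.2 for i in range(50)]
c, a = fit_zipf(pilot)
```

The fitted coefficients (c, a) could then be plugged into a bound such as eq. (2) to size the sketch width before the main sketching round.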
---
**Q2**. “For the DP analysis consider mentioning the exact mechanism used to add noise. At first glance, it might not be clear if the method employs local DP or central DP.”
**A2**. We employ central DP. We will clarify this in the revision.
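For concreteness, central DP here could look like the standard Gaussian mechanism applied server-side to the aggregated sketch counters (an illustrative sketch only; the paper's exact mechanism and calibration may differ):

```python
import math
import random

def gaussian_mechanism(sketch_counters, epsilon, delta, l2_sensitivity):
    """Add centrally calibrated Gaussian noise to aggregated sketch counters.
    Uses the standard (epsilon, delta)-DP calibration
    sigma = Delta_2 * sqrt(2 ln(1.25/delta)) / epsilon; illustrative only."""
    sigma = l2_sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon
    return [c + random.gauss(0.0, sigma) for c in sketch_counters]

random.seed(0)
aggregated = [120.0, -31.0, 57.0, 4.0]  # hypothetical server-side aggregated counters
private = gaussian_mechanism(aggregated, epsilon=1.0, delta=1e-5, l2_sensitivity=1.0)
```

Because the noise is added once at the server after aggregation, each client's contribution stays hidden in the released sketch while the per-counter noise remains far smaller than under local DP.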
---
**Q3**. “In certain cases, the idea that the frequencies have a lighter tail might not be verifiable (before running the sketching algorithm). Such an instance may occur if the communication or DP budget is constrained or if there are not enough clients to handle both the pilot and the actual sketching algorithm under DP constraints. Thus, adding this limitation clearly toward the end of the article will add clarity.”
**A3**. We agree with this limitation and will clarify this in the revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for taking the time to address my concerns. I believe the paper adequately addresses all of my concerns including the additional minor ones. Therefore, I will keep the existing rating of the paper unchanged. | Summary: This paper explores several variants of the count sketch method for federated frequency estimation (FFE).
- With only one round of communication, they provide a refined instance-dependent analysis for CountSketch and find that the sketch size depends on unknown problem-dependent quantities.
They then propose a two-phase approach to first learn these quantities and then perform FFE.
- They also consider the case where multiple communication rounds are available; they explore several variants of count sketch methods to utilize the multiple rounds, provide theoretical analysis, and conduct numerical experiments to verify their findings.
- They also explore a differentially private extension.
======= after rebuttal =======
I have read the authors' rebuttal, which addresses most of my concerns.
The remaining issue from my perspective is the lack of an instance-dependent lower bound, which is left as future work, as stated in the rebuttal.
Hence, I am inclined to raise my score.
Strengths: They conduct a systematic investigation of FFE problems and provide theoretical analysis.
The paper is well-written and easy to follow.
Weaknesses: The paper tries to study many things, which, however, leaves many questions untouched. See the Questions part.
There seems to be not much novelty in the theoretical analysis.
The benefit of instance dependence on the estimation guarantee is not well illustrated.
Though the author provides a lot of numerical experiments to validate their theoretical predictions, it is still unclear how the proposed method would affect practical training where (overparameterized) neural networks are used.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. From Figure 1 (a)(d), I find that given a target $\ell_{\infty}$ error, the minimax optimal curve often achieves a smaller $\ell_{\infty}$ error than the instance optimal one, though the latter aligns with the straight line $y=x$ better. It seems that the minimax optimal one is better than the instance optimal one in the achieved accuracy, which is quite counterintuitive. Is there anything I missed? Could the author provide some explanation?
2. Could the author provide some intuitive explanation about why Hybrid Sketch could be better than Fresh Sketch? Note that Hybrid Sketch shares a set of bucket hashes but uses independent sets of sign hashes. Why this dependence on shared bucket hashes is better than independent bucket hashes? What about independent bucket hashes and shared sign hashes?
3. Is it possible to provide any lower bound to show the instance-dependent upper bounds are tight?
4. Line 316, when $n$ is fixed, why increasing $M$ improves the estimation error? In Theorem 4.1, the estimation error $\sqrt{\frac{\sum_{i>W}\left(F_i^*\right)^2}{W}}$ seems to have nothing to do with $M$ in the worst case.
5. Is it possible to extend the methods developed in this paper to empirical risk minimization where gradients of nonconvex models (such as neural networks) are used?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and suggestions. We address your concerns as follows.
---
**Q1**. “From Figure 1 (a)(d), … It seems that the minimax optimal one is better than the instance optimal one in the achieved accuracy, which is quite counterintuitive. Is there anything I missed? Could the author provide some explanation?”
**A1**. We respectfully point out that Figures 1(a) and (d) imply that the minimax optimal curve is **worse** than the instance optimal curve in terms of computing a minimal necessary sketch size. Observe that the minimax optimal curve achieves a smaller error than the instance optimal curve, while both are smaller than the targeted error. Note that Count Sketch with a larger sketch size achieves a smaller error. So Figures 1(a) and (d) show that the minimax optimal method (eq. (3)) computes an unnecessarily large sketch size to achieve a targeted error, causing a waste of bandwidth, while the instance optimal method (eq. (2)) computes a more accurate sketch size. This is also illustrated in Figures 1(b) and (e).
---
**Q2**. “Could the author provide some intuitive explanation about why Hybrid Sketch could be better than Fresh Sketch? Note that Hybrid Sketch shares a set of bucket hashes but uses independent sets of sign hashes. Why this dependence on shared bucket hashes is better than independent bucket hashes? What about independent bucket hashes and shared sign hashes?”
**A2**. Fresh Sketch computes an “average of medians”, that is, produces well-concentrated estimates of each local frequency independently and then takes the average to produce an estimate of the global frequency. In comparison, Hybrid Sketch computes a “median of averages”, that is, 1) produces high-variance estimators for each local frequency, then 2) averages them to obtain low-variance estimators for global frequency, and finally 3) uses the median trick to amplify success probability.
Fresh Sketch is less effective than Hybrid Sketch because the average of independent well-concentrated estimates is less well-concentrated compared to the median of independent low-variance estimates. This is because, for a good event (where the error is small) to happen in the former method, multiple independent good events have to happen simultaneously, which leads to an invocation of the union bound. In contrast, the good event in the latter method only fails with at most exponentially small probability.
In Step 2) of Hybrid Sketch, we use fresh sign hashes and shared bucket hashes to reduce estimation variance. The fresh sign hashes allow cancellation of errors when averaging the (high-variance) local estimators, which leads to an error bound that depends on the heterogeneity vector rather than the global frequency vector. The shared bucket hashes allow the error cancellation to happen in the same bucket, avoiding interference from other buckets, which leads to an error bound that depends on only the tail of the heterogeneity vector rather than the entire vector.
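The concentration argument in **A2** is the classic median-of-means phenomenon. The following self-contained toy (with a synthetic noise model, not the paper's sketches) shows the median step taming rare gross errors that a plain average cannot:

```python
import random
import statistics

random.seed(7)
TRUE = 10.0                      # true global frequency (assumption for the demo)
N_GROUPS, GROUP_SIZE, TRIALS = 9, 5, 200

def noisy_sample():
    # Gaussian noise plus a rare symmetric gross outlier (illustrative noise model).
    x = TRUE + random.gauss(0, 1)
    if random.random() < 0.05:
        x += random.choice([-1000.0, 1000.0])
    return x

def plain_mean():
    # "Average only": one gross outlier corrupts the whole estimate.
    return statistics.mean(noisy_sample() for _ in range(N_GROUPS * GROUP_SIZE))

def median_of_means():
    # "Median of averages": the median ignores the few corrupted group means.
    means = [statistics.mean(noisy_sample() for _ in range(GROUP_SIZE))
             for _ in range(N_GROUPS)]
    return statistics.median(means)

err_plain = statistics.mean(abs(plain_mean() - TRUE) for _ in range(TRIALS))
err_mom = statistics.mean(abs(median_of_means() - TRUE) for _ in range(TRIALS))
print(f"plain mean: {err_plain:.2f}  median-of-means: {err_mom:.2f}")
```

Both estimators see the same total number of samples; the median-of-means estimate fails only when a majority of groups are corrupted, which happens with exponentially small probability — the same reason the "good event" in Hybrid Sketch avoids the union bound incurred by Fresh Sketch.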
---
**Q3**. “Is it possible to provide any lower bound to show the instance-dependent upper bounds are tight?”
**A3**. A minimax lower bound exists in the literature that justifies the sharpness of the upper bound in the worst case; see, e.g., Theorem 8.1 in [13]. However, whether there is an instance-dependent lower bound that matches the best-known instance-dependent upper bound is still an open problem to the best of our knowledge, and we leave that for future work. Our conjecture is that the current instance-dependent upper bound (e.g., Proposition 2.1) can be improved.
---
**Q4**. “Line 316, when $n$ is fixed, why increasing $M$ improves the estimation error? In Theorem 4.1, the estimation error $\sqrt{\frac{\sum_{i>W}(F_i^*)^2}{W}}$ seems to have nothing to do with $M$ in the worst case.”
**A4**. Thank you for pointing out this typo. We will revise line 316 to “... a larger number of rounds M improves the estimation error **in non-worst cases, e.g., when the local frequency vectors are nearly homogeneous**...”
---
**Q5**. “Is it possible to extend the methods developed in this paper to empirical risk minimization where gradients of nonconvex models (such as neural networks) are used?”
**A5**. Good question. Our focus in this work is a federated frequency estimation problem. However, our analysis for Count Sketch and Hybrid Sketch can be applied to general vector recovery problems as well. Therefore, we expect our methods can be applied to estimate the gradient vectors in the ERM problem. We will comment on this in the revision.
---
**Q6**. “There seems not much novelty in the theoretical analysis.”
**A6**. We use some standard techniques to derive our bounds. However, we’d like to emphasize that both the considered multi-round FFE problem settings and results are new. It also requires non-trivial efforts to establish new effective algorithms such as Hybrid Sketch, to allow the sketching method to adapt to the hardness of the instance, and to make the algorithm differentially private.
---
**Q7**. “The benefit of instance dependence on the estimation guarantee is not well illustrated.”
**A7**. We hope our explanations about Figure 1 in **A1** have clarified the benefits of instance-dependent methods over the minimax optimal method. We are happy to improve our presentation further if you have specific suggestions!
---
**Q8**. “Though the author provides a lot of numerical experiments to validate their theoretical predictions, it is still unclear how the proposed method would affect practical training where (overparameterized) neural networks are used.”
**A8**. We’d like to emphasize that our focus in this work is the federated frequency estimation problem and neural network training is beyond the scope of this work. We agree that extending our methods to neural network training is an interesting future direction. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
EgoEnv: Human-centric environment representations from egocentric video | Accept (oral) | Summary: This work aims to learn human-centric environment representations from first-person camera views. Their novel transformer-based approach encodes the local environment state at each time-step of an egocentric video, defined as a set of objects, along with their approximate distances, located in front of, to the left of, to the right of, and behind the camera-wearer. This work claims to outperform state-of-the-art representations in predicting visited rooms and retrieving significant moments when responding to natural language queries.
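Purely as an illustration of the "local environment state" described in this summary, such a state could be held in a structure like the following (field names and types are assumptions for illustration, not the paper's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class LocalState:
    """Illustrative container: nearby objects with approximate distances
    (in meters), bucketed by direction relative to the camera-wearer."""
    front: list = field(default_factory=list)   # e.g., [("tv", 2.0)]
    left: list = field(default_factory=list)
    right: list = field(default_factory=list)
    behind: list = field(default_factory=list)

# A hypothetical state at one video time-step.
state = LocalState(front=[("tv", 2.0)], behind=[("couch", 1.0), ("table", 3.5)])
```

In the paper's training setup, such per-time-step states serve as prediction targets for the transformer, linking each egocentric view to its (partially unobserved) surroundings.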
Strengths: `Significance:` This work takes an important step towards proposing an approach that models the physical surroundings of a camera from a single egocentric perspective. This work could have multiple applications in AR, VR and robot navigation.
`Originality:` Even though there has been a lot of work in the domain of video understanding in 3D environments, most approaches localize the camera-wearer but do not learn representations for the camera-wearer’s surroundings. Therefore, to the best of my knowledge, this work is novel.
`Quality:` The authors provide a good theoretical background to their approach and back up their claims with rigorous experimentation on multiple datasets and tasks.
`Clarity:` The paper and the supplementary materials are clearly written and easy to follow.
Weaknesses: - This work does not consider the influence of motion blur if the person wearing the camera makes sudden movements, making it challenging to apply this work to real environments.
- This work also uses a semantic segmentation model to identify physical objects in the camera-wearer's environment. However, that could be limiting, as their approach is restricted to objects that are classified by the segmentation model.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - The authors should elaborate on how their models would deal with motion blur and sudden movements of the camera. Is there a motion model that is implicitly learned based on the camera movements or is it assumed that the camera moves at a fixed speed?
- Since this method is intended for AR applications, were there any evaluations done to benchmark the real-time inference performance of this method?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors do not explicitly state the limitations of this work, and should refer to the Weaknesses section of the review and consider stating those limitations in the paper (if applicable).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > W1. This work does not consider the influence of motion blur if the person wearing the camera makes sudden movements, making it challenging to apply this work to real environments.
> Q1. The authors should elaborate on how their models would deal with motion blur and sudden movements of the camera. Is there a motion model that is implicitly learned based on the camera movements or is it assumed that the camera moves at a fixed speed?
As mentioned in L151, the pose embedding model was intentionally designed to account for missing or unreliable pose estimates in real-world video due to rapid camera motion and motion blur; however, there is no explicit mechanism in place to filter them out. In terms of inputs to the model, models are trained assuming a fixed speed (i.e., a fixed FPS for the simulated agent trajectories). One of our testbeds, Ego4D, exhibits frequent motion blur; please see the Supp video for examples.
---
> W2. This work also uses a semantic segmentation model to identify physical objects in the camera wearers’s environment. However, that could be limiting as their approach is restricted to objects that are classified by the segmentation model.
Yes, this is an important point. As mentioned in L231, we are limited to the 23 categories present in HM3D, detected by semantic segmentation models; however, these categories capture a broad range of commonly available household objects (TV sets, couches, tools etc.) in our video datasets (Ego4D, HouseTours). Importantly, the object classes are *not* a required output of the model – they are used only as labels in the local state task to train the model to link views across a trajectory. Once trained, the prediction heads are discarded while the encoder is retained to generate the environment feature $h_q$ (L192). In short, a larger set of objects is desirable for training, but does not restrict our approach for generating downstream environment features.
---
> Q2. Since this method is intended for AR applications, were their any evaluations done to benchmark the real-time inference performance of this method?
No, real-time inference is an important feature but was not a research focus in this work.
---
> L1. The authors do not explicitly state the limitations of this work, and should refer to the Weaknesses section of the review and consider stating those limitations in the paper (if applicable).
Please see the discussion on limitations in the common response.
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal, and am satisfied with the answers to my concerns. Therefore, I am increasing my rating from Weak Accept to Accept. | Summary: In this paper, the authors address the limitation of current video understanding methods that only analyze short video clips in isolation, without considering the broader context of the camera-wearer's environment. They propose an approach that establishes a connection between egocentric videos and the surrounding environment by learning predictive representations. To accomplish this, the authors train their models using videos captured by agents in simulated 3D environments, where the environment is fully observable. An interesting finding is that despite being exclusively trained on simulated videos, the proposed approach effectively handles real-world videos from HouseTours and Ego4D datasets. It also achieves state-of-the-art results in the Ego4D NLQ challenge.
I have increased the recommendation after the rebuttal. The rebuttal addressed my concerns.
Strengths: Overall, this paper introduces an innovative approach that bridges the gap between egocentric video and the camera-wearer's environment. By leveraging predictive representations, trained on simulated videos, the proposed method demonstrates improved performance over traditional clip-based approaches in various human-centric video tasks and real-world scenarios. The general idea of grounding egocentric video in its underlying world environment is very interesting. The proposed method to learn representations that are predictive of their surroundings and then enhance standard clip-based models is technically sound.
Weaknesses: In my opinion, the core of this method is a pretraining process that refines a feature into a better one by implicitly seeing its surroundings.
In this sense, in all the experiments, EgoEnv features have knowledge about the surroundings; however, other features do not. Thus, one important experiment that needs to be done is allowing other models to access the same amount of information but in a more straightforward manner.
For example, giving other models not only the current clip as input, but also frames that are uniformly sampled from the whole video.
From the tables, the FrameFeat and ObjectFeat baselines are already very useful. I wonder whether a simple extension of these methods that similarly allows a longer temporal range as input would also perform reasonably well.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The experiments stated in the weakness section.
2. Per the NeurIPS requirement, I hope the authors can discuss the limitations of the paper.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors did not state the limitations of this work in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > W1/Q1. In my opinion, the core of this method is a pretraining process that refines a feature into a better one by implicitly seeing its surroundings. In this sense, in all the experiments, EgoEnv features have knowledge about the surroundings; however, other features do not. Thus, one important experiment that needs to be done is allowing other models to access the same amount of information but in a more straightforward manner.
Please note that we do provide such baselines. As mentioned in L294, all models have access to the full video, while inference is performed at a particular time-step. Several baselines already use the entire video context: TRF (scratch) and EPC both use the same inputs as our model (uniformly sampled across the video), and Ego-Topo builds a graph over the entire video's frames. TRF (scratch) is in fact a combination of FrameFeat + the transformer to allow a longer temporal range.
---
> Q2. Per the NeurIPS requirement, I hope the authors can discuss the limitations of the paper.
> The authors did not state the limitations of this work in the paper.
Please see the discussion on limitations in the common response. | Summary: The paper proposes a novel framework to learn environment-aware video representations from egocentric videos. The framework can be trained on synthetic data and incorporated into various existing approaches for real-world downstream tasks including RoomPred and NLQ. Experiments demonstrate that models equipped with the framework achieve superior performances in the two downstream tasks on two real-world egocentric datasets, which demonstrates the value of synthetic data for real-world 3D understanding under egocentric videos.
Strengths: (1) The method in the paper releases the need for camera poses, making it more robust to the noise in structure-from-motion algorithms compared to existing pose-based methods.
(2) Experiments fully demonstrate the effectiveness of the method.
(3) Comprehensive ablation studies demonstrate the value of the core designs of the method.
(4) The method is trained on only automatically-generated synthetic data yet performs better than baselines on real-world scenarios, indicating that simulated data could directly benefit real-world egocentric video understanding without techniques of sim-to-real transfer.
(5) The writing is clear and well-organized.
Weaknesses: (1) The method hugely relies on the surrounding background objects in the 3D scene. When a person navigates a scene with a clean background, it might be challenging for the method to encode expressive environment features. When dealing with easy real-world instances in the RoomPred task, the method's performance is slightly worse than the baselines'.
(2) The performance gain from the proposed method seems marginal on Ego4D.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: When incorporating the ground-truth camera poses into the method, I am curious about how to input the pose to EgoEnv+pose since I do not find a network parameter that transfers camera poses to pose embeddings.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: One potential limitation is that the method has not been tested on dynamic scenes with dynamic background objects or humans such as multi-person interaction scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > W1 The method hugely relies on the surrounding background objects in the 3D scene. When a person navigates in a scene with clean backgrounds, it might be challenging for the method to encode expressive environment features. When dealing with easy real-world instances in the RoomPred task, the method performance is slightly worse than the baselines.
Yes, as mentioned in L187 our method builds on learned priors of objects and their layouts and will be equivalent to standard video models in, for example, an empty room. However, empty scenes may not be aligned with the practical relevance of our models (i.e., assistive robots / AR mentioned in L58-62) where objects and cluttered scenes are key.
---
> W2 The performance gain from the proposed method seems marginal on Ego4D.
We present a detailed analysis of our approach’s performance on Ego4D in Supp. B. To summarize, Ego4D videos are in-the-wild videos of natural human activity in diverse scenes. This is in contrast to the simulated walkthroughs used for pretraining. We show that our approach performs well on samples that are aligned with the pretraining data – i.e., indoor home scenarios and navigation heavy scenarios, with lower improvements on out-of-distribution scenes like outdoor activities (e.g., golfing, outdoor cooking). Importantly, while the performance improvement is indeed lower than improvements on HouseTours, our method was the top-ranked approach on Ego4D for NLQ at the time of submission, and remains third ranked on the NLQ leaderboard, demonstrating its impact.
---
> Q1 When incorporating the ground-truth camera poses into the method, I am curious about how to input the pose to EgoEnv+pose since I do not find a network parameter that transfers camera poses to pose embeddings.
The input poses are directly transformed to the dimension of the pose embeddings ($p_t$ in Fig 3) using a linear layer.
---
> L1. One potential limitation is that the method has not been tested on dynamic scenes with dynamic background objects or humans such as multi-person interaction scenarios.
For testing, Ego4D contains videos with dynamic objects, object interactions, and social interactions. For training, yes, these are interesting future directions that will be driven by advances in simulator capabilities.
---
Rebuttal Comment 1.1:
Comment: Thanks to the author's response. My concerns have been fully addressed. I believe the work is a solid contribution to the community and would like to maintain my initial rating. | Summary: In the paper, a model is introduced to extract vector embeddings of environments using images from a first-person perspective. The model is trained in a simulated environment where a virtual agent moves around and collects images to learn about its surroundings. The model's performance was tested in two different tasks - predicting the layout of a room and remembering sequences of events. The experimental results showed that the proposed model outperforms other baselines in both downstream tasks.
Strengths: - The paper is well-structured and easy to follow, with a clear motivation behind it. The tables and figures are helpful in explaining the concepts. Overall, I found the paper to be an enjoyable read.
- The paper tackles the very interesting and relevant problem of scene understanding from first-person images.
- The proposed model is very interesting and the experiments are enlightening and helpful in utilizing environment information of egocentric images to better understand the scene.
- I believe this is a solid paper that deserves to be communicated.
Weaknesses: Although the paper is well-written and structured, many important experiments and discussions were excluded and can only be found in the 18-page supplemental material. One example is the full ablation procedure which is solely available in the supplemental material.
I couldn't find any information on the limitations of the agent, which is crucial in this work. For instance, how well does the agent perform in outdoor environments? Moreover, it's important to know if there are any differences in walking patterns between the virtual agent and real people that may affect the results. Additionally, it would be helpful to know how effective the vector embedding is in dynamic scenes where the environment is constantly changing during video acquisition.
I believe that it would be beneficial to briefly discuss these questions that have been raised.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1) How was the walking pattern of the virtual agent generated? Were different speeds or motions (e.g., running, walking) used during training?
2) Based on the images presented in the paper, it appears that the agent did not look up or down. I am curious if this lack of movement would impact its performance in videos focused on activities such as cooking, where the camera angle is often directed downwards. Has there been any analysis on this topic?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: I was unable to locate the discussion on limitations either in the paper or the lengthy supplemental material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > W1. Although the paper is well-written and structured, many important experiments and discussions were excluded and can only be found in the 18-page supplemental material. One example is the full ablation procedure which is solely available in the supplemental material.
We tried to prioritize the most important information in the main paper, but of course, some experiments had to be moved to the Supp. If our paper is accepted, we will move relevant sections, for example the discussion on the limitations of our approach and the ablation experiments mentioned, to the main paper. In the main paper, we point to specific experiments in the supplementary so that a reader knows what is available.
---
> W2. I couldn't find any information on the limitations of the agent, which is crucial in this work.
Please see the common response for a discussion on the limitations. Below are responses to specific questions.
> “how well does the agent perform in outdoor environments?”
As mentioned in L220-3 and Supp. B, our approach is best suited to indoor environments that are aligned with training scenes. Supp. Table 1 quantifies this for the NLQ task on Ego4D, where we see the largest improvements in indoor home scenarios (e.g., listening to music, household management) and navigation heavy scenarios (walking indoors and outdoors) and lower improvements in outdoor scenes (e.g., golfing, outdoor cooking). Note that our approach is compatible with outdoor video; however, it is limited by the availability of simulated data for activities in outdoor scenes.
> Additionally, it would be helpful to know how effective the vector embedding is in dynamic scenes where the environment is constantly changing during video acquisition.
Interesting point. Our walkthroughs are generated in static scenes where objects are not moved, while the real world is dynamic. We argue in Supp. C1 that a large part of real-world environments are static (e.g., counter tops, staircases; beds, couches, TV sets) which is valuable to encode even when some objects may have moved around. Our embeddings do perform better where there is primarily camera motion and low scene motion / object interaction – Supp. Table 1 shows increased performance in navigation-heavy videos (e.g., walking indoors and outdoors). Further, the larger improvements across both tasks in HouseTours, where the environment is also static, as compared to Ego4D, also hints at this effect.
---
> Q1-2. Moreover, it's important to know if there are any differences in walking patterns between the virtual agent and real people that may affect the results.
> What algorithm was used to generate the walking pattern of the virtual agent? Were different speeds or motions (e.g., running, walking, etc.) used during training? Based on the images presented in the paper, it appears that the agent did not look up or down. I am curious if this lack of movement would impact its performance in videos focused on activities such as cooking, where the camera angle is often directed downwards. Has there been any analysis on this topic?
Details about the simulated agent walkthroughs are in Supp. C1. To summarize, agents follow the shortest-path between two points for a fixed number of steps. The action space is discrete (move forward by 0.25m, turn left/right). The frames were rendered into videos using a fixed FPS (i.e., equivalent to a single speed of walking). The episode length and FPS were selected to approximately match the characteristics of human walkthroughs in HouseTours (i.e., agents move between approximately 20 rooms on average in an episode). We did not experiment with agents looking up/down or in general more “human-like” head motion or navigation policies, though that is an interesting direction for future research.
---
Rebuttal Comment 1.1:
Comment: After reading the rebuttal, I remain confident in my original assessment and will maintain my initial score. I believe this is a solid paper that deserves to be communicated. | Rebuttal 1:
Rebuttal: Thanks to all the reviewers for their effort and constructive feedback. All five reviewers recommend accepting the paper, with two recommending strong accept (5, 6, 7, 8, 8). We address common concerns shared by reviewers below.
**Limitations of the proposed approach**
We will emphasize the limitations more in the paper and summarize here.
We discuss the limitations of our approach in the context of the sim-to-real gap in L328 and Supp. B. Specifically, our model is affected by the type and diversity of pretraining data — videos of simulated agents walking around a house — limiting its generalization to unconstrained real-world video. Similarly, our approach is limited by simulator functionality – HM3D scenes support a small set of objects, which may not overlap with real-world environments, and Habitat does not support fine-grained object-interactions (e.g., chopping vegetables). As a result, we find that our approach works well on videos that are consistent with pretraining (i.e., indoor home scenarios; videos with lots of walking and less object interaction) but contributes less on out-of-distribution scenes and activities (e.g., golfing, outdoor cooking). We expect that future advancements in simulator capabilities (e.g., human motion models for agents, fine grained object interaction simulation) will help address this class of limitations.
Beyond the sim-to-real gap, our approach has other limitations. First, our model does not have specialized modules to aggregate long-term temporal (or pose) information into its representation, compared to, for example, structure from motion methods that can aggregate and re-localize observations over a long video. As a result, our approach does not see benefits from increasing the temporal window from which the memory is constructed (Supp. F3). Next, our local state prediction task is learned in a coarse 2D space – the top-down map of the environment – which does not encode fine-grained geometric relations which may be important for certain tasks (e.g., is the object placed on top of, or inside another?). Finally, our approach is computationally intensive. While pre-training is a one-time cost, generating each augmented clip feature at inference requires the computation of several frame features in the vicinity of the clip, and then aggregating that information using the transformer module. This may be limiting for real-time inference applications in the assistance setting.
Despite these limitations, our approach outperforms state-of-the-art representations for predicting visited rooms and retrieving important moments from natural language queries. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The work presents an approach to learn spatial environment representations for egocentric videos. Such representations encode the camera-wearer’s (seen and unseen) local surroundings/environment. Previous approaches mostly focus on learning representations over a longer temporal space, however, understanding of the physical spatial space seem to be missing which is addressed in the work via learning of environment-aware features.
The method defines a local environment state as a set of objects in relative directions to the camera wearer. The local state has both geometric information (relative object locations) and semantic information (object labels). The model learns this local state by learning two matrices: a direction matrix which represents whether an object is in front, behind, left, or right, and a distance matrix which represents the distance of the object from the camera wearer in a discretized space. To define the matrices, pose information is needed. Since ground-truth pose information is missing from real-world egocentric videos, a model first learns the pose information from a simulated environment by minimizing a cross-entropy loss between the pose of the camera wearer and the pose of the object.
Next, an environment memory is encoded from a video walkthrough of T frames and a query frame. Pose embeddings are generated for all T frames and the query frame using the learned pose model, and each frame is encoded with this pose embedding. Then, K video frames are sampled to construct an environment memory using a transformer encoder. This representation consists of both temporal and spatial information about the environment. The transformer decoder then uses this environment memory and the query frame to generate an EgoEnv representation, which is combined with the original video features (e.g., ResNet). These representations are finally used for downstream video tasks.
The method is evaluated on the room prediction task and the NLQ challenge on the Ego4D, HouseTours, and Matterport3D datasets.
Strengths: 1. The paper is well-written and has a detailed supplementary material covering the dataset statistics, model architecture details, and ablation experiments.
2. The work shows a thorough evaluation with the method being evaluated on multiple datasets and compared with multiple baselines.
Weaknesses: 1. It would be great to discuss the accuracy of the pose embedding learning network on the simulated environment, since the overall model is dependent on it.
2. There can be more discussion as to how the sim-to-real gap gets reduced in the approach.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: How much time does it take to train the pose embedding network and the rest of the model too in general?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: No such limitations. The paper is well-written, covers all the implementation details, and discusses all aspects of the approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > W1. It would be great to discuss the accuracy of the pose embedding learning network on the simulated environment, since the overall model is dependent on it.
Thanks for the suggestion. As mentioned in L157, the pose embedding network is trained to predict relative pose discretized into 12 angles and 4 distance ranges. On the validation set, the model achieves accuracies of 48.4% on relative distance prediction and 34.4% on relative orientation prediction. Note that this task is challenging – models must predict relative pose for all possible pairs of observations in a trajectory using their visual features alone – however the goal is to generate pose encodings, not to output perfect pose. In Supp. E.2.1, we highlight the importance of these pose embeddings for local state prediction. In short, including pose embeddings leads to better object class and distance prediction, especially when objects are not immediately visible.
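The discretization described here can be sketched in a few lines. The bin boundaries, distance edges (in meters), and function name below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def discretize_relative_pose(dx, dz, num_angle_bins=12, dist_edges=(1.0, 2.5, 5.0)):
    """Map a relative ground-plane displacement (dx, dz) to a pair of
    class labels: one of 12 angle bins and one of 4 distance ranges.
    The distance edges are made-up values for illustration."""
    angle = np.arctan2(dz, dx) % (2 * np.pi)                       # [0, 2*pi)
    angle_bin = int(angle // (2 * np.pi / num_angle_bins))         # 0..11
    dist_bin = int(np.searchsorted(dist_edges, np.hypot(dx, dz)))  # 0..3
    return angle_bin, dist_bin
```

With labels like these, relative-pose prediction becomes a pair of classification problems trained with cross-entropy, consistent with the per-class accuracies reported above for relative distance and orientation.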
---
> W2 There can be more discussion as to how the sim-to-real gap gets reduced in the approach.
We make efforts to minimize the sim-to-real gap in our dataset and task design. First, we opt for photo-realistic environments from HM3D which contain high-fidelity reconstructions of diverse, real-world houses. Second, when creating the simulated trajectories in Sec. 4 (Simulator environments), we try to match the characteristics of simulated agent walkthroughs to camera-wearer movement in video datasets (L233). Specifically, we position cameras at head-level for the simulated agents, and we adjust the episode duration such that the number of rooms visited per episode by simulated agents roughly matches that in HouseTours (approximately 20 room transitions on average). Please see our Supp. video to compare the generated walkthroughs with HouseTours videos. Finally, we design our local state prediction task around capturing object and environment layout that reflects priors of real-world object distributions. We select this over other simpler alternatives like predicting image features directly, which are more susceptible to failures due to sim-to-real visual differences. We compare alternatives in Supp. E.4.
We discuss the effect of the sim-to-real gap in Supp B. To summarize, we find that our approach works best on scenes that are aligned with our simulated training data (i.e., indoor home scenarios; videos with lots of walking and less object interaction) and naturally performs worse on out-of-distribution activities (e.g., golfing, outdoor cooking).
---
> Q1. How much time does it take to train the pose embedding network and the rest of the model too in general?
Both the local state prediction and the pose embedding training are performed for 2.5k epochs (L262). Each training run takes ~24 hours on two Quadro RTX 6000 GPUs. Note that this is a one-time pre-training cost: once trained, the model is directly used as a feature extractor in downstream tasks and does not incur any extra training cost.
---
Rebuttal Comment 1.1:
Comment: I have read the author’s rebuttal and my concerns have been fully addressed. This work is a good contribution towards egocentric research and thus, I will maintain my initial rating. | null | null | null | null | null | null |
Matrix Compression via Randomized Low Rank and Low Precision Factorization | Accept (poster) | Summary: This work studies the problem of computing a low rank approximation when the low rank factors are under a bit budget constraint, that is, we must output factors L and R with bounded bits such that LR approximates a given input matrix A in the Frobenius norm. The authors show that by incorporating sketching into the quantization procedure, one can get improved bounds, due to the fact that a Gaussian sketch can “flatten” the entries of a vector, which is advantageous when rounding (Appendix D). Empirical results show that this algorithm indeed gives improved results over other naive implementations such as directly rounding SVD factors.
Strengths: The problem of efficiently quantizing low rank approximations is an extremely important problem given that both low rank approximations and low precision is gaining popularity for compressing massive neural networks (https://papers.nips.cc/paper/2020/file/13b919438259814cd5be8cb45877d577-Paper.pdf, https://arxiv.org/abs/2302.03764). This work offers an interesting new method which takes advantage of the “flattening” property of Gaussian sketches in order to obtain improved results for quantization in the context of low rank approximation. This idea is conceptually simple yet interesting. Empirical results are also convincing.
Weaknesses: The contribution is already quite nice, but there are several followup investigations that could strengthen this work much more:
* Are there any lower bounds on the trade-off between the approximation accuracy and the bit budget?
* There are structured sketching transforms such as the Subsampled Randomized Hadamard Transform (see Theorem 2.4 of http://www.cs.cmu.edu/afs/cs/user/dwoodruf/www/wNow3.pdf) which also have the “equalizing” or “flattening” type of behavior, yet can be applied in much faster time, and furthermore also save on the storage of the sketching matrix since it is an integer matrix. Can this be used to get faster implementations in theory/practice?
* Can you report the running time of the experiments?
* Can you comment on whether the bit bounds imply actual savings in the memory usage, or are the practical implementations of bit complexity too crude to capture these improvements?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Should put A \approx LR in the abstract
* Some of the references in the Related Works section for Low-rank approximation seem inappropriate for the context. Candes-Tao and Recht et al are not focused on faster algorithms for SVD, but rather a related but different problem of matrix completion. Gradient descent on the low rank factors is indeed one way to solve approximate SVD, which I agree Zhang et al captures, but there may be more appropriate references such as https://arxiv.org/abs/2106.14289 and references within.
* Line 58: you should first establish that n >= d, or else the SVD time should be O(min{nd^2, dn^2})
* Typo (line 77): should be “instead of quantizing x directly”
* Is the randomized rounding in Section 2.1 necessary? Can you get better results with naive rounding?
* Line 177: what do you mean by “saturated” and “unsaturated”? Please define this term, it is unfamiliar to non-experts.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Discussion of limitations is limited. This work is mainly theoretical, and there are no potential negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for reading our paper and writing the review. We hold in high regard the voluntary nature of the review process, and in what follows, we engage with your concerns.
> Structured sketching
We agree that the SRHT sketch also has an equalizing effect, and can indeed lead to faster algorithms for compressing a matrix than computing a Gaussian sketch, since the Hadamard transform can be computed recursively. We experimented with SRHT but decided not to include the results in our paper for two reasons:
1. The SRHT sketch did not perform better than LPLR with a Gaussian sketch in terms of Frobenius norm error.
2. We have done our theoretical analysis with the Gaussian sketch.
More importantly, computing the sketch $AS$ is not the most time-consuming step. Solving the minimization problem $argmin_W ||ASW - A||_2^2$, either in closed form or with conjugate gradient descent, is the slowest step. Furthermore, once the low-rank factors $L$ and $R$ are found, only these factors are stored, and not the sketching matrix. We will add a discussion related to this in App. K as well.
> Lower bounds on the trade-off
This is a very interesting question. It might be possible to derive lower bounds on this tradeoff using covering and packing arguments for the space of all rank-$k$ approximations of a matrix. Such lower bounds are non-trivial and we will mention this in App. K.
> Do bit bounds imply actual savings?
Practical applications of quantization for hardware storage often necessitate working within specific bit budgets, such as 16-bit or 32-bit precision. With this in mind, when implementing our algorithm LPLR in practical contexts, we set values for the variables $B$ and $B'$ to align with these hardware precision limitations. Subsequently, the sketch size $m$ becomes an adjustable parameter, allowing us to regulate the compression of the total parameter count, denoted as $(n + d)m$. Given that the total number of parameters in matrix $A$ amounts to $nd$, we show that for sufficiently low-rank matrices, it is feasible to choose $m$ in a manner such that $Bnm + B'md = B_{nq}nd$, where $B_{nq}$ can be as small as $1$ bit. This implies that $B_{nq}$ can take on values that would otherwise be unattainable owing to inherent hardware limitations.
Moreover, in other scenarios where implementations of bit complexity need not be so crude, the quantization codebook can be customized beyond the confines of traditional hardware limitations, and we are not restricted solely to the choices of $B$ and $B'$. For instance, when transmitting a matrix over a communication channel, we can employ modulation techniques like amplitude or frequency shift keying. This enables us to use $B = B' = 2$, effectively achieving $4$ quantization levels ($2^2$). Considering that a greater number of amplitude or frequency levels necessitates larger channel resources, the bit bounds derived in our paper reveal the potential for conserving these resources in such situations.
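As a small worked example of this bookkeeping (the dimensions below are illustrative, not taken from the paper):

```python
# Choose the sketch size m so that storing L (n x m at B bits per entry)
# and R (m x d at B' bits per entry) matches an effective budget of
# B_nq bits per entry of the n x d matrix A.
n, d = 4096, 1024        # illustrative matrix dimensions
B, B_prime = 8, 8        # hardware-supported precisions for the factors
B_nq = 1                 # target effective bits per entry of A

# Solve B*n*m + B'*m*d = B_nq*n*d for m:
m = B_nq * n * d / (B * n + B_prime * d)
print(m)  # 102.4, so any integer m <= 102 stays within the 1-bit budget
```

This is how an effective rate of 1 bit per entry can be realized even though each stored value uses 8-bit precision.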
> Randomized vs. naive rounding
Randomized rounding, also known as uniformly dithered quantizer, is preferred due to its ability to produce an unbiased estimate of the input. Dithered quantizers offer an advantage by introducing a non-zero probability of quantizing an input to either its ceiling or floor, resulting in reduced variance when averaging multiple independent realizations. Moreover, the unbiased nature of the quantizer output simplifies the analysis. In our experiments, we did not observe any benefits of using deterministic rounding instead of dithered rounding.
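A minimal sketch of a uniformly dithered quantizer, with an empirical check of its unbiasedness (the parameter choices are illustrative, not the paper's exact construction):

```python
import numpy as np

rng = np.random.default_rng(0)

def dithered_quantize(x, R=1.0, bits=2):
    """Uniformly dithered (randomized-rounding) scalar quantizer with
    dynamic range [-R, R]: uniform dither is added before rounding, so
    an unsaturated input lands on a neighbouring level with probabilities
    that make the output an unbiased estimate of x."""
    delta = 2 * R / (2 ** bits - 1)          # quantization step size
    u = rng.uniform(-delta / 2, delta / 2)   # dither
    return float(np.clip(delta * np.round((x + u) / delta), -R, R))

# Averaging many independent realizations recovers the input.
x = 0.3
estimate = np.mean([dithered_quantize(x) for _ in range(20000)])
```

The sample mean of `estimate` concentrates around `x`, while any single realization is one of the coarse quantization levels.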
> Saturated and unsaturated quantizer
The dynamic range of a scalar quantizer is the interval $[-{\rm R}, +{\rm R}]$, as defined in Sec. 2.1. If the input $x$ to the quantizer falls outside this range, i.e., $x > {\rm R}$ or $x < -{\rm R}$, the quantizer is said to be *saturated*. We want $\rm R$ to be sufficiently large so that $-{\rm R} \leq x \leq {\rm R}$ is satisfied, i.e., the quantizer is *unsaturated*, so that the quantized output is an unbiased estimate of the input with bounded variance. We will define this in the main paper.
> Runtime of the experiments
Please refer to the global response (specifically Table 2 and Fig. 3). In Table 2, we compare the wall-clock CPU time for computing the low-rank quantized factors of CIFAR-10 and CIFAR-100 datasets via LPLR, LSVD and DSVD. Each image embedding forms a row of the input matrix, with low rank factors computed for *each* class of the input dataset. It is evident that LPLR significantly outperforms DSVD and LSVD in terms of speed.
Fig. 3 complements the tabulated data by visualizing the runtime involved in two scenarios. The first scenario involves computing the matrix-vector multiplication $\bf Ax$ directly. The second scenario involves approximating this multiplication using a low-rank factorization, specifically $\bf L(Rx)$, where matrices $\bf L$ and $\bf R$ are tall and wide matrices, respectively. This low-rank factorization strategy reduces the overall number of multiplications from $nd$ to $(n+d)m$.
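The multiplication-count saving behind this comparison is easy to verify directly (the sizes below are illustrative, and the random factors are stand-ins for quantized low-rank factors):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 2048, 2048, 64
A = rng.standard_normal((n, d))
L = rng.standard_normal((n, m))   # tall factor
R = rng.standard_normal((m, d))   # wide factor
x = rng.standard_normal(d)

y_direct = A @ x         # costs n*d scalar multiplications
y_lowrank = L @ (R @ x)  # costs (n + d)*m scalar multiplications

# Here (n + d)*m = 262,144 versus n*d = 4,194,304: a 16x reduction.
```

The parenthesization `L @ (R @ x)` matters: computing `(L @ R) @ x` would first rebuild the full n x d matrix and forfeit the saving.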
> Misc typos
Thank you for pointing them out. We will proofread and rectify them.
---
Rebuttal Comment 1.1:
Title: Thank you for the responses!
Comment: The authors' responses are very helpful in understanding this work. As indicated in my original review, I regard this work highly, and the authors' responses help to increase my confidence in this assessment.
I encourage the authors to mention the comment about the most time-consuming step in this procedure in the main text, I find this valuable. I would also appreciate if you mention that naive rounding does not make a difference in practice, I imagine this is much more convenient for practitioners.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Dear Reviewer,
Thank you once again for reading our work, and for providing a thoughtful and constructive feedback. We are happy that our responses have helped clarify your concerns. We will definitely include a mention of the most time-consuming step in the main text, along with emphasizing that naive rounding does not make a difference in practice.
If you have any further comments or questions, please do not hesitate to reach out. | Summary: This paper introduces a novel low-rank matrix factorization algorithm that combines the sketching-matrix idea with quantization, as follows:
1. Use a Gaussian random matrix to generate a sketch of the matrix and compute the approximate basis
2. Use Quantization with Q - to get Q(AS)
3. Use Q(AS) and Q' to get Q'(W)
4. Return Q(AS) and Q'(W)
The authors provide a theoretical and numerical analysis of their idea.
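The four steps can be sketched in a few lines of NumPy. This is a simplified illustration, not the authors' implementation: it uses deterministic rounding in place of the paper's dithered quantizer, and the sizes and bit-widths are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_quantize(X, bits=8):
    """Uniform quantizer on [-max|X|, max|X|] (deterministic rounding
    here for simplicity; the paper uses randomized rounding)."""
    delta = 2 * np.abs(X).max() / (2 ** bits - 1)
    return delta * np.round(X / delta)

def lplr(A, m, bits_left=8, bits_right=8):
    """Simplified LPLR sketch: sketch A, quantize the left factor,
    then fit and quantize the right factor."""
    n, d = A.shape
    S = rng.standard_normal((d, m)) / np.sqrt(m)   # 1. Gaussian sketch
    L = uniform_quantize(A @ S, bits_left)         # 2. Q(AS)
    W, *_ = np.linalg.lstsq(L, A, rcond=None)      #    argmin_W ||L W - A||_F
    R = uniform_quantize(W, bits_right)            # 3. Q'(W)
    return L, R                                    # 4. return the factors

# On a low-rank matrix, the quantized factors approximate A well.
k = 10
A = rng.standard_normal((200, k)) @ rng.standard_normal((k, 100))
L, R = lplr(A, m=20)
rel_err = np.linalg.norm(L @ R - A) / np.linalg.norm(A)
```

Note that the least-squares fit for `W` is performed against the already-quantized left factor, so the right factor partially compensates for the left factor's quantization error; this is the key difference from quantizing both SVD factors independently.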
Strengths: - The introduction section is written very well and is informative and to the point, e.g., briefly introducing the LPLR algorithm and discussing low-rank approximation and randomized quantization.
- The introduced algorithm is clearly communicated and the results are theoretically sound.
Weaknesses: - The abstract seems to be a bit wordy; it would be nice if there were formulations and numbers that grab the reader's attention with results.
- I like the way the work is motivated, with memory constraints, but I am curious whether there is any real-life application of low-rank decomposition that actually saves memory; some examples would be good.
- The paper doesn't exactly introduce a new direction; rather, it seems to take existing ideas and put them together.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Tables 1 & 2 puzzle me a bit: it seems that the approximation error of Naive Uniform is smaller than that of LPLR, and at a cheaper computation cost. Could you clarify in the tables themselves why LPLR is better? It seems confusing at first glance.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: There are many sketching / matrix factorization algorithms; I am puzzled why only the Naive Uniform and Direct SVD algorithms are presented as comparison points.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
We are grateful for the valuable time you spent in reading our paper and writing the review. We value the fact that the review process is a voluntary endeavor, and in the sections below, we tackle each of your concerns to address and clarify your questions.
> Tables 1 & 2 puzzle me a bit
Tables $1$ and $2$ present the approximation error guarantees for three different compression methods: $\rm (i)$ naive quantization with an error of $\epsilon$, $\rm (ii)$ direct SVD with an error of $\lVert \mathbf{A}_k - \mathbf{A} \rVert_F^2 + \epsilon$, and $\rm (iii)$ LPLR with an error of $(1 + \delta)\lVert \mathbf{A}_k - \mathbf{A} \rVert_F^2 + \epsilon$. At first glance, it might appear that naive quantization is always superior to LPLR since the latter contains additional terms in the approximation error. However, it's crucial to consider that naive quantization also demands more bits than LPLR.
While we do not claim that LPLR will consistently outperform naive quantization, LPLR exhibits better performance when dealing with matrices that are inherently low-rank to begin with. In the case of low-rank matrices $\mathbf{A}$, the first term, $(1 + \delta)\lVert \mathbf{A}_k - \mathbf{A} \rVert_F^2$, representing the error of the best rank-$k$ approximation, is very small (and becomes zero if $\mathbf{A}$ is precisely rank $k$). In such scenarios, both naive quantization and LPLR yield an approximation error of $\epsilon$. However, LPLR achieves this error with fewer bits compared to naive quantization. On the other hand, for matrices that are close to full rank, i.e., with singular values decaying slowly, the term $\lVert \mathbf{A}_k - \mathbf{A} \rVert_F^2$ will not be negligible. In other words, given the same bit-budget, LPLR can achieve a lower level of error compared to naive quantization. We will highlight this in the caption of the tables.
> Why are only the Naive Uniform and Direct SVD algorithms presented as comparison points?
In this study, we have focused on matrix factorizations that are well-suited for achieving low-rank approximations. To obtain the best rank-$k$ approximation for a given matrix $\mathbf{A}$, we first compute the Singular Value Decomposition (SVD) and then retain the top-$k$ singular values along with their corresponding singular vectors. When considering both low-rank and low-precision requirements, SVD + quantization emerges as a natural benchmark. On the other hand, for matrix quantization, the standard practice is to use naive uniform quantization due to its simplicity. Although other matrix factorizations like $\mathbf{QR}$ or $\mathbf{LU}$ decompositions also provide low-rank approximations, they are suboptimal compared to the SVD-based low-rank approximation method (which is optimal). Hence, they are not expected to outperform direct-SVD when combined with a naive application of uniform quantization. Moreover, concerning matrix compression, it is essential to achieve parity in terms of bit-requirements among various benchmarks for a fair comparison. To accomplish this, direct-SVD, LPLR, and LSVD offer tunable parameters, which can be adjusted. These parameters include the target rank $k$ and the sketch size $m$, which allow us to control the compression ratio and maintain the desired parity.
> Application in real life for low rank decomposition that actually saves memory
Low rank decomposition has found a contemporary use in conserving memory, particularly in compressing neural networks. Besides the sources cited in the initial passage, additional references include:
1. Y. Idelbayev and M. Á. Carreira-Perpiñán, "Low-Rank Compression of Neural Nets: Learning the Rank of Each Layer," CVPR, 2020.
2. T. N. Sainath, B. Kingsbury, V. Sindhwani, E. Arisoy and B. Ramabhadran, "Low-rank matrix factorization for Deep Neural Network training with high-dimensional output targets," ICASSP 2013.
The importance of compressing neural networks stems from several factors, including the need to decrease memory usage, enhance inference speed, and facilitate deployment on devices with limited resources. Low rank decompositions play a pivotal role in achieving these goals by breaking down the initial weight matrices into smaller, organized matrices with fewer parameters.
> The paper doesn't exactly introduce a new direction
We propose an idea which is simple in its execution and pragmatic in its implementation. Our idea is well supported by modern linear algebra libraries, while offering theoretical analysis to effectively identify the regime in which it offers value over conventional, standard methods. Moreover, our method offers a tunable knob to achieve effective bit-rates in software that are not currently supported by hardware. For example, in Table $3$, using LPLR with appropriate parameters is equivalent to operating at $1$ or $2$ bits per pixel, despite hardware working with a higher bit precision. Moreover, our method is adaptable to future hardware advancements, such as the emergence of $4$-bit GPUs, which can significantly speed up our technique's primitive operations. We believe that the simplicity of our algorithm adds to its appeal and makes it a valuable contribution to the literature, bridging the fields of matrix compression, quantization, and sketching while ensuring compatibility with upcoming hardware developments.
> Abstract seem to be a bit wordy
We appreciate your input on improving the abstract's appeal to readers. We concur with the idea of emphasizing the main outcomes derived from Tables $1$ and $2$, which involve contrasting our novel algorithm with other benchmarks. This contrast could be established through aspects like computational complexity (e.g., highlighting the ${\rm O}(ndm)$ of LPLR in contrast to the ${\rm O}(nd^2)$ of direct SVD quant.). Moreover, we intend to incorporate specific numerical results obtained from our simulation efforts. | Summary: The authors investigate combining low-rank matrix factorization and (uniform scalar) quantization.
Through theoretical analysis and experiments, they demonstrate that this can yield much higher accuracy than directly quantizing the input matrix. One natural choice is to compute the SVD of the matrix and quantize the two factors independently. It's shown that quantizing the left factor first and then computing a new right factor that best approximates the input when multiplied with the quantized left factor yields much better results. The choice of SVD is not critical and could be replaced by randomly mixing the columns of the input matrix, i.e. random sketching. In fact, sketching has provable benefits as the entries of sketched matrices are bounded with very high probability. This bounded range improves quantization theoretically. The authors prove several theorems (their proofs are in the extensive appendix) and conduct detailed empirical evaluation.
Strengths: 1) All the key ideas of the paper (low-rank approximation, sketching) are sound.
2) Detailed theoretical analysis with rigorous proofs.
3) Extensive experiments.
4) Compression of neural network weights and embeddings via low rank approximation and quantization are popular and impactful topics both to reduce memory usage and to speed up training and inference.
Weaknesses: 1) Neither the ideas nor the analysis are particularly inventive in my opinion. It's self-evident that optimizing the second factor after quantizing the first (LSVD) is superior to independent quantization (DSVD); in fact the process could be iterated further. While I appreciate the 30+ pages of proofs provided by the authors, it seems as if they rely on chaining known results and techniques for sketching and random matrices, combined with patient algebra.
2) Despite its seemingly weaker theoretical bounds, LSVD is always one of the most accurate methods in the experiments (see Tables 4-7). This is also clearly highlighted by the authors in the limitations section.
3) Quantization aware training (not considered in the paper) is highly likely to produce equivalent or better results.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) Line 226 and Theorem 3.2: For (1+eps') relative Frobenius error approximation LPLR requires m=O(k/eps') >> k columns, which the text seems to overlook. Could you please discuss?
2) Table 3: Could you add LSVD to this experiment too? DSVD's Frobenius error is always the same 0.496, and LPLR's error at the bottom of the table, where the bit budget is doubled, is the same as at the top of the table. Could you discuss why? What was the rank (k) of DSVD, was it the same as m of LPLR? I.e. could you make the Rank column header precise?
3) Matrix entries are typically rescaled to [0,1] before quantization as it's cheap to store their min/max (or some very low/high quantiles), even for each row (or column) of a matrix. How would such rescaling change your theoretical and empirical results?
4) Line 77: Sx -> x
5) Table 1, row of LPLR: delta is undefined
6) Lines 219-222: missing log2(), 4 times
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, limitation section is adequate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for the time you invested in reading our paper and writing the review. We appreciate that reviewing is a voluntary effort and below, we address each of your concerns to resolve your queries.
> $1 + \epsilon'$ relative Frobenius error approximation LPLR requires $m=O(k/\epsilon') \gg k$
Indeed, increasing the sketch size $m$ leads to a reduction in the relative error concerning the best rank-$k$ approximation, denoted as $||A_k - A||_F^2$. However, it is crucial to consider that low-rank approximations of a matrix are valuable only when the matrix is inherently low-rank approximable, i.e., $||A_k - A||_F^2$ is already sufficiently small to begin with (and precisely zero if $A$ has a rank less than $k$). In such cases, selecting the sketch size $m$ to be slightly larger than $k$, such as $m = k + p$, where $p \geq 2$ is the oversampling factor, ensures that $\delta$ does not become arbitrarily large. Consequently, the additional approximation error introduced by sketching, denoted as $\delta||A_k - A||_F^2$, is of the same order of magnitude as the best rank-$k$ approximation error. This observation aligns well with practical applications where low-rank approximations of matrices are beneficial. The impact of the oversampling factor $p$ has been extensively studied in previous works, for example, see references [16] and [74]. Moreover, from a theoretical perspective, the relative error can be improved to $(1 + \delta)^{1/(2q + 1)}$ with $q$ power iterations, though this comes at the expense of increased computational complexity. In our numerical simulations, we have chosen the value of $m$ to ensure that given the values of $B$ and $B'$, the number of parameters being quantized results in a total bit requirement that achieves parity with the naive quantization benchmark.
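To make this concrete, here is a small NumPy sketch (our own toy construction, not an experiment from the paper) showing that a modest oversampling $p$ already brings the rangefinder's error close to the best rank-$k$ error, whereas $m = k$ exactly can only match it from above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 300, 200, 10

# synthetic matrix with k strong directions and a flat tail
U, _ = np.linalg.qr(rng.standard_normal((n, d)))
V, _ = np.linalg.qr(rng.standard_normal((d, d)))
s = np.concatenate([np.linspace(10.0, 5.0, k), np.ones(d - k)])
A = (U * s) @ V.T

best_k = np.linalg.norm(s[k:])  # best rank-k Frobenius error of A

def rangefinder_err(m):
    """Frobenius error of the randomized rangefinder with sketch size m."""
    S = rng.standard_normal((d, m))
    Q, _ = np.linalg.qr(A @ S)
    return np.linalg.norm(A - Q @ (Q.T @ A))

e_exact = rangefinder_err(k)      # m = k: no oversampling
e_over = rangefinder_err(k + 10)  # m = k + p with oversampling p = 10
```

Any rank-$k$ projection is bounded below by `best_k`, while the oversampled sketch stays within a small constant factor of it.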
> Matrix entries are typically rescaled to [0,1]
We have taken into theoretical account this specific scenario in Table 2, where we make the assumption that each element of the matrix adheres to $A_{ij} = O(1)$. This is achieved by appropriately adjusting the scale of the matrix entries. In our image compression experiments, we have also practically accounted for this rescaling, as we are aware that pixel values within images fall within the range of 0 to 255.
It's crucial to emphasize the contrast between the scenarios depicted in Tables 1 and 2. In Table 1, we make the assumption that the matrix's rows are normalized, denoted as $||A^{(i)}|| = O(1)$. This assumption is particularly relevant when performing nearest neighbor classification, where each row of $A$ represents an embedding vector. In this context, scaling the norm doesn't alter the cosine similarity, which is why this normalization is appropriate.
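For concreteness, a minimal sketch of the kind of rescaled uniform quantizer discussed above; the exact form of the paper's quantizer $Q$ (e.g., whether rounding is deterministic or stochastic) is our assumption here:

```python
import numpy as np

def uniform_quantize(A, bits):
    """B-bit uniform quantization after rescaling A's entries to [0, 1].

    Only the two scalars lo/hi need storing alongside the integer codes,
    which is why such rescaling is cheap.
    """
    lo, hi = float(A.min()), float(A.max())
    step = (hi - lo) / (2 ** bits - 1)
    codes = np.round((A - lo) / step)   # integers in {0, ..., 2**bits - 1}
    return codes * step + lo            # de-quantized matrix

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 255.0, size=(64, 64))  # pixel-valued entries
A_hat = uniform_quantize(A, bits=8)
max_err = np.abs(A - A_hat).max()           # at most step / 2
```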
> Accuracy of LSVD
Indeed, in Tables 4 to 7, LSVD stands out as one of the most accurate methods. This should not be interpreted as a drawback; instead, it serves as a valuable insight. In fact, even theoretically, LSVD does not have weaker bounds than LPLR (cf. Thm. I.1 vs. Thm. 3.2) -- there's no $(1 + \delta)$ factor present for LSVD. LSVD precisely calculates the left low-rank factor as $Q(\widetilde{U}_k)$, where $\widetilde{U}_k \in \mathbb{R}^{n \times k}$ is derived from the first $k$ columns of $U\Sigma$. This requires computing the SVD of $A$, which is computationally intensive. On the other hand, LPLR delivers comparable performance to LSVD while demanding significantly less computation. When computing the SVD is practical, LSVD is a strong contender and deserves consideration as an option. However, in cases where matrices become exceedingly large and SVD computation becomes infeasible, LPLR offers a viable solution to overcome this limitation. We discuss this in lines 1204 to 1210 of App. K.
> Quantization aware training
We are not really sure what is meant by quantization-aware training (QAT) in the context of low-rank approximation for matrix compression. For training a neural network, QAT adds quantize/de-quantize nodes, and treats the quantization loss as part of the training loss. Subsequent fine-tuning of the parameters makes the model more resilient. For QAT in matrix compression, did you mean treating $L$ and $R$ as functions of some trainable hyper-parameters, which are subsequently optimized? If so, yes, it could possibly decrease the approximation error, at the cost of additional computation (which can potentially be prohibitive).
> Neither the ideas nor the analysis are particularly inventive in my opinion.
We believe that the simplicity of our algorithm adds to its appeal and makes it a valuable contribution to the literature, bridging the fields of matrix compression, quantization, and sketching while ensuring compatibility with upcoming hardware developments. Embracing simplicity and clarity can play a crucial role in achieving scalability and facilitating future adaptability. Additionally, such basic principles serve as foundational building blocks that can be further optimized to support more sophisticated developments.
We propose an idea which is simple in its execution and pragmatic in its implementation. Our idea is well supported by modern linear algebra libraries, while offering theoretical analysis to effectively identify the regime in which it offers value over conventional, standard methods. Moreover, our method offers a tunable knob to achieve effective bit-rates in software, not currently supported by hardware. For example, in Table 3, using LPLR with appropriate parameters is equivalent to operating at 1 or 2 bits per pixel, despite hardware working with a higher bit precision. Moreover, our method is adaptable to future hardware advancements, such as the emergence of 4-bit GPUs, which can significantly speed up our technique's primitive operations.
> Table 3:
We have rectified this in the last point in global response (and Table 1 of the PDF). Rank column refers to sketch size $(m)$ for LPLR and target rank $(k)$ for DSVD or LSVD.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed reply and explanation. I'll revise my final review upwards accordingly.
If you happen to read this comment in time:
re: "references [16] and [74]" - these are quantization papers, could you double check?
re: QAT: I meant learning factors $L$ and $R$ directly with gradient descent in the quantized bottleneck layer of the form $Q(L)\cdot Q(R)$ replacing $A$, where $Q$ denotes quantization as before.
---
Reply to Comment 1.1.1:
Title: Response
Comment: Dear reviewer,
Thank you for your response. We are glad that you found our rebuttal convincing and are grateful for your willingness to revise your review upwards.
To answer your remaining questions:
1. Sincere apologies for the confusion -- we meant references [16] and [74] of our supplementary material, which includes references from the main paper as well as the appendix. The specific papers we pointed out to are:
P. Drineas, R. Kannan, and M. W. Mahoney. Fast monte carlo algorithms for matrices II: Computing a low-rank approximation to a matrix. SIAM Journal on Computing, 36(1):158–183, 2006.
R. Witten and E. Candès. Randomized algorithms for low-rank matrix factorizations: Sharp performance bounds. Algorithmica, 72(1):264–281, may 2015
2. QAT: Thank you for clarifying this. Yes, it is likely that QAT can decrease the approximation error further, but as we mention in our rebuttal, this comes at the cost of additional computation (which can potentially be prohibitive for compressing large data matrices). | Summary: The paper studies compression of low-rank matrices by simultaneous low-rank factorization and quantization. It proposes a method that first quantizes the randomized rangefinder as the first low-rank factor and then quantize the minimizer of reconstruction error with respect to the remaining factor as the second. Randomized rangefinder uses random Gaussian matrix which possesses the equalization property to maintain low quantization error, compared to naïve quant. Experiments are provided to demonstrate the benefits of the proposed algorithm.
Strengths: 1. provides a low-rank factorization algorithm that comes with quantization for further reducing the memory footprint.
2. experiments demonstrate the advantage of the algorithm.
Weaknesses: 1. experimental settings should be made clearer. the current description is a bit confusing.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. what is the difference between LPLR and LPLR-SVD in experiments?
2. Accuracy seems to decrease with bits for some cases in experiments. Why?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: No limitations are addressed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
We are grateful for the time you spent in reading our paper and writing the review. We address your concerns below:
> Difference between LPLR and LPLR-SVD in experiments
LPLR refers to our main algorithm in Alg. $1$, in which the left low-rank factor is $Q(AS)$, where $S$ is the Gaussian sketching matrix. On the other hand, LPLR-SVD computes the left low-rank factor as $Q(U_kS)$, where $U_k$ is the matrix of top-$k$ singular vectors. Obtaining $U_k$ requires computing the SVD of the matrix which can be computationally prohibitive. LPLR-SVD is described in footnote $1$ on page $9$, and also in lines $111$ to $117$. LPLR-SVD is analyzed in detail in Appendix $I$.
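The distinction can be illustrated with a hedged NumPy sketch; the uniform quantizer, the bit width, and the use of $U_k\Sigma_k$ for the SVD-based left factor are our own simplifications and may differ from the paper's exact constructions:

```python
import numpy as np

rng = np.random.default_rng(1)

def q(M, bits=10):
    # stand-in uniform quantizer; the paper's Q may differ in its details
    lo, hi = M.min(), M.max()
    step = (hi - lo) / (2 ** bits - 1)
    return np.round((M - lo) / step) * step + lo

n, d, k, m = 200, 100, 10, 14
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, d))  # rank-k matrix

# LPLR (Alg. 1): left factor Q(AS) from a Gaussian sketch; no SVD needed
S = rng.standard_normal((d, m)) / np.sqrt(m)
L1 = q(A @ S)
R1 = q(np.linalg.lstsq(L1, A, rcond=None)[0])

# SVD-based variant: left factor built from the top-k singular directions
U, s, _ = np.linalg.svd(A, full_matrices=False)
L2 = q(U[:, :k] * s[:k])
R2 = q(np.linalg.lstsq(L2, A, rcond=None)[0])

err_lplr = np.linalg.norm(A - L1 @ R1) / np.linalg.norm(A)
err_svd = np.linalg.norm(A - L2 @ R2) / np.linalg.norm(A)
```

Both routes produce a quantized factorization of the low-rank input; the second requires the (expensive) SVD of $A$, the first only a sketch.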
> Accuracy seems to decrease with bits for some cases in experiments.
In cases where accuracy increases with decreasing number of bits, we believe that the quantization noise adds an inherent regularization effect. This conjecture is worth exploring in detail and we have acknowledged this in Appendix $K$ (lines $1211$ to $1215$).
> No limitations are addressed in the paper.
We have mentioned some limitations and further discussions in Appendix $K$.
> Experimental settings should be made clearer. the current description is a bit confusing.
We would be extremely grateful if you could kindly indicate any sections that require further clarity. We are more than willing to provide explanations and make any necessary edits. Furthermore, we have also done additional experiments which have been added to our global response, along with corresponding discussions. | Rebuttal 1:
Rebuttal: Dear Reviewers,
We greatly appreciate the time you invested in reviewing our paper and sharing your concerns. As part of the global response, we have included deliberations regarding the scenarios in which LPLR demonstrates its practical utility and potential advantages over established baselines. Furthermore, we report additional experimental evaluations pertaining to image compression, along with wall-clock runtimes for LPLR and alternative baselines.
**Comparison between LPLR and baselines**: In Tables 1 and 2, we have compared LPLR with other benchmarks. In the column denoting approximation errors, aside from the common quantization error $\epsilon$ present across all rows, DSVD and LPLR introduce an additional term representing the error from low-rank approximation. Consequently, it might appear that naive quantization consistently outperforms these methods. However, it's essential to recognize that compression techniques based on low-rank factorization hold value exclusively when the matrix being compressed is inherently low-rank, meaning that $\lVert \mathbf{A}_k - \mathbf{A} \rVert_F^2$ starts off small. If $\lVert \mathbf{A}_k - \mathbf{A} \rVert_F = 0$, LPLR can achieve identical error levels as naive quantization, while demanding fewer bits than the latter. In other words, given the same bit-budget, LPLR can achieve a lower level of error compared to naive quantization.
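As an illustrative toy example (with a stand-in uniform quantizer of our own, not the paper's $Q$), the bit-budget parity argument can be checked numerically: naive 2-bit quantization and an LPLR-style factorization with 8-bit factors are given the same total budget:

```python
import numpy as np

rng = np.random.default_rng(2)

def q(M, bits):
    # stand-in uniform quantizer over the matrix's full range
    lo, hi = M.min(), M.max()
    step = (hi - lo) / (2 ** bits - 1)
    return np.round((M - lo) / step) * step + lo

n, d, k = 256, 256, 8
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, d))  # inherently low rank

# naive quantization at 2 bits/entry: budget = 2 * n * d bits
A_naive = q(A, bits=2)

# LPLR-style: 8-bit factors, sketch size m chosen so (n + d) * m * 8 = 2 * n * d
m = (2 * n * d) // (8 * (n + d))   # = 32 here
S = rng.standard_normal((d, m)) / np.sqrt(m)
L = q(A @ S, bits=8)
R = q(np.linalg.lstsq(L, A, rcond=None)[0], bits=8)

err_naive = np.linalg.norm(A - A_naive) / np.linalg.norm(A)
err_lplr = np.linalg.norm(A - L @ R) / np.linalg.norm(A)
```

At equal budget, the factored representation exploits the low-rank structure that naive quantization ignores.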
**Computation-constrained compression**: In situations where computational resources are limited, employing the naive strategy emerges as the most cost-effective approach for matrix quantization. Nevertheless, due to its failure to capitalize on a matrix's inherent low-rank arrangement, naive quantization may prove considerably suboptimal. As matrices increase in dimension, accommodating them in memory becomes impractical, rendering approaches like DSVD or LSVD unviable. In such scenarios, our LPLR method stands as a viable alternative, demanding slightly more computational effort than naive quantization, yet capable of harnessing the low-rank structure for improved approximation accuracy.
**Compression without any computation constraint**: If there is no scarcity of computation resources in being able to compute the SVD, it is possible to compress the matrix with all the strategies, and choose the best one.
**Benefits beyond compression**: A low-rank and low-precision factorization $\mathbf{A \approx LR}$ also enables us to approximate the matrix-vector product $\mathbf{Ax}$ by $\mathbf{L(Rx)}$, which can be computed much faster as $\mathbf{L}$ and $\mathbf{R}$ are tall and thin matrices. This is shown in Fig. 3 of the PDF.
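A quick sketch of this point (sizes chosen arbitrarily for illustration): the factored product needs roughly $m(n+d)$ multiplications instead of $nd$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, m = 2000, 1500, 50
L = rng.standard_normal((n, m))   # tall and thin
R = rng.standard_normal((m, d))   # short and wide
A = L @ R                         # dense n x d product, for comparison only
x = rng.standard_normal(d)

y_direct = A @ x                  # n * d multiplications
y_factored = L @ (R @ x)          # m * (n + d) multiplications

mult_ratio = (n * d) / (m * (n + d))  # how many times fewer multiplications
```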
**Clarification to Table 3 of the main paper**: Regarding the low-rank methods, namely Direct SVD and LPLR, we identified (and fixed) a software bug in the parameter enumeration code, which led us to mistakenly assess all outcomes under a fixed sketch size equivalent to a rank of 200. This caused a lack of parity with respect to the bit requirement for naive quantization, and the comparison was unfair. Consequently, the Frobenius norm values appeared relatively steady; the slight variance in LPLR stemming from distinct samples of the sketching matrix. We sincerely apologize for this error and present revised outcomes in Table 1 of the global response PDF, now encompassing a significantly broader range of bit budgets.
In Table 1, we assess the performance of LPLR, Direct SVD, and LSVD across a uniform range of bit budgets (including many that are not hardware primitives). This approach enables a more comprehensive examination of the approximation error and its variability when employing different low-rank approximation techniques with varying bit budgets. One can clearly observe that LPLR and LSVD (LPLR-SVD) outperform naive quantization for $1$-bit quantization, except in the extreme instance of ${\rm B} = 32$. This trend persists even as bit budgets increase. For higher bit allocations, the superiority of LPLR (and LSVD) over NQ in terms of approximation error is apparent when the bit allocation $\rm B$ is sufficiently low. This allocation strategy permits a greater allocation of storage to capturing higher-rank components. This delicate balance lies at the core of leveraging the effectiveness of LPLR, which excels at balancing the number of low-rank factors captured against the precision with which they are stored. We hope that this clarifies the rationale behind the scenario in which LPLR outperforms standard techniques, namely naive quantization.
Pdf: /pdf/566b4b1eea8fc0eafa98979cbf778d9ba9b54c28.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The paper proposes a memory-efficient approach to approximating a matrix $A$ by a low-rank approximation $A=LR$ combined with quantization. The LPLR algorithm first applies a quantized random projection (RP) as $L$, and then solves a minimization problem for the right low-rank factor $R$, which is also quantized afterwards. A theoretical approximation error is obtained and compared with an alternative approach that quantizes SVD low-rank factors instead of RP. Experiments are conducted on image approximation and embedding classification to show the effectiveness of the proposed method.
Strengths: 1. The paper is well-organized and easy to follow. The theoretical analysis seems rigorous.
2. Experiments on multiple ML tasks and datasets are provided which make the results more grounded.
Weaknesses: In my understanding, the main idea of the paper is to waive the need to compute the SVD of A, by using random projection (RP) as a surrogate. I have the following concerns and suggestions:
1. At line 220, the authors wrote that the bits per entry for SVD-quant is $O(nd\sqrt k)$, but in Table 1 and Table 2, it is $O(k\sqrt{nd})$. Please double check and clarify. Also, the authors simply stated that LPLR is better than SVD-quant in terms of bits per query, which is not true for some n, d, m, k (comparing the results in the table). I suggest the authors carefully compare the results and state the regimes in which LPLR is better, and in which it is worse.
2. In the main Theorem 3.2, $\kappa$ could be negative, right? Is $1-c_4\sigma_k/\sigma_{k-1}$ bounded? Or do we need to further assume an eigen gap for this result to hold?
3. I understand that the main usage of LPLR is for matrix (data) approximation, so the first experiments (Figure 1 and Table 3) make sense to me. However, for the second set of experiments on classification, why not directly using $Q(AS)$ (i.e., the quantized random projections)? This saves the storage for W (in other words, we may increase the sketch size m when using Q(AS) only). Some recent references on this include
Random projections with asymmetric quantization, Li and Li NeurIPS 2019
Generalization error analysis of quantized compressive learning, Li and Li, NeurIPS 2019
Indeed, the research on QRP is highly related to this submission, since LPLR essentially does an optimization on W to recover the data from QRP. I suggest to add some discussion on this direction in the paper and some empirical comparisons.
4. Also, if LPLR is used for processing or storing the data for classification or search tasks, it might be inconvenient to handle new data points (e.g., in a streaming setting). Thus, it may not be suitable for such tasks. On the other hand, recently people are using low rank approximation in LLM fine-tuning frequently. Experiments related to fine-tuning language models could be a better application scenario for the proposed method.
5. Some references on similar results are missing. A similar result as in Appendix D, that $S^TQ(AS)$ with uniform quantization has approximation error independent of $d$, has been established in [EDEN: Communication-Efficient and Robust Distributed Mean Estimation for Federated Learning, Vargaftik et al., ICML 2022] (or maybe some even earlier paper) for rotation matrix $S$. This related result should be cited. Also, Eq. (2) is a standard result for uniform stochastic rounding. A reference should be added there.
In all, I think the paper proposes a simple but intuitive method for low-rank low-precision matrix approximation from QRPs. The idea is clear, the analysis seems sufficient (despite the above and below questions). The experiments can be improved, but the current results on several tasks and datasets are convincing enough to show the effectiveness of LPLR in matrix approximation. For now, I would recommend borderline accept.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Proposition D.1 only bounds the error between $Q(Sx)$ and $Sx$. How does it imply $||S^TQ(Sx)-x||^2$ is also a constant?
2. What solver is used to solve Algorithm 1 line 4? This should be clarified. Is closed-form feasible?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: NA.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
We are grateful for the time you spent in reading our paper and writing the review. We address your concerns below.
> $Q(AS)$ for classification
The works of Li & Li (2019), although related, are different from matrix compression addressed in our work. They study nearest neighbor search by approximating $Ax$ by $Q(AS)Q'(S^Tx)$, where rows of $A$ are datapoints, $x$ is the query vector, and $S$ is a Gaussian sketching matrix.
1. Storage: In Li & Li, in addition to $Q(AS)$, $S$ needs to be stored too, in full precision (FP), to process an incoming query $x$. So, the storage is $nm$ quantized and $dm$ full-precision entries. In contrast, LPLR stores only the quantized entries of $L$ and $R$, i.e., $nm + md$ quantized entries, and $Ax \approx LRx$. Hence, the storage of LPLR is smaller.
2. Computation: In Li & Li, a computation of $S^Tx$ is needed for every query $x$. This requires $md$ FP mults, and computing $Q(AS)Q(S^Tx)$, i.e., $nmd$ quantized mults. In contrast, for LPLR, we compute $LRx$ with $m(n + d)$ mults, which is faster.
Nevertheless, we acknowledge that these are very interesting works and will definitely cite them.
> Regimes when LPLR is better
Thank you for pointing this out. The bit requirement is indeed $O(k\sqrt{nd})$ for direct-SVD quant., and line 220 is a typo which we will rectify. Tables 1 and 2 represent two distinct regimes. In Table 1, we assume that the $\ell_2$ -- norm of the $i^{th}$ row of matrix $A$ is bounded by a constant, denoted as $||a^{(i)}|| = O(1)$. Conversely, in Table 2, we consider that each entry of matrix $A$ is bounded by a constant, indicated by $A_{ij} = O(1)$.
For the scenario described in Table 1, the bit-requirement for direct-SVD is $0.5\log_2(O(k\sqrt{nd}))$. Meanwhile, for LPLR, the bit-requirement is $0.5\log_2(\tilde{O}(nm/\sqrt{d}))$, disregarding the logarithmic terms. Evidently, LPLR demands fewer bits than direct-SVD because $k$ and $m$ are much smaller than $min(n,d)$, given that $n$ and $d$ can be substantially larger than $k$ and $m$ for inherently low-rank matrices. In the regime presented in Table 2, the bit-requirement for direct-SVD quantification remains $0.5\log_2(O(k\sqrt{nd}))$, unchanged from before. However, LPLR now requires $0.5\log_2(\tilde{O}(nm\sqrt{d}))$, slightly more than direct-SVD due to the additional $\sqrt{n}$ factor inside the logarithm. Thus, it makes sense to expect that direct-SVD can perform better in this regime.
This observation is further supported by our numerical simulations in Tables 4 to 6, where direct-SVD indeed outperforms LPLR in certain scenarios. Nevertheless, it is crucial to emphasize that direct-SVD necessitates computing the SVD, which can be prohibitive for very large matrices due to the current memory limitations of available GPUs, making LPLR the only viable option. As discussed in lines 1204 to 1210 of App. K, if our objective is merely to compress an input matrix $A$ without concerning ourselves with the computational effort needed for compression, we can try all compression techniques and choose the one that yields the minimum Frobenius norm error.
Thank you for raising this concern and we will definitely add these discussions to our main text.
> Thm. 3.2, $\kappa$ could be negative?
No, $\kappa$ is always positive. Yes, $(1 - c_4\sigma_k/\sigma_{k+1})$ can be negative in the statement of Thm. 3.2, but we do not need to assume an eigen gap for this result to hold true. If $1 - c_4\sigma_k/\sigma_{k+1} \leq 0$, we should set $\kappa = \kappa(A)$. This is evident from the proof of Lem. E.2 in lines 868 to 873. Our correct expression for $\kappa$ would be $\kappa = \min(\kappa(A), \kappa(A_k)(1 - c_4\sigma_k/\sigma_{k+1})^{-1})$ if $1 - c_4\sigma_k/\sigma_{k+1} > 0$, and $\kappa = \kappa(A)$, otherwise. Thank you for pointing this out, and we will rectify it in the statement of the theorem.
> Streaming setting & LLM fine-tuning
This is a very interesting point. It is possible to extend LPLR to streaming data settings using sketching-based low-rank approximation. In this regard, LPLR is easier to convert to a streaming algorithm than direct SVD based quantization. A recent work on sketching-based streaming low-rank approximation is *Streaming Low-Rank Matrix Approximation with an Application to Scientific Simulation*, Joel A. Tropp et al., SIAM Journal on Scientific Computing (2019), url = [https://doi.org/10.1137/18M1201068](https://doi.org/10.1137/18M1201068). Experiments related to fine-tuning large language models are also a potential application scenario we can consider. We will add related discussions on these in App. K.
> Prop. D.1
Note that: $||S^TQ(Sx) - x||^2 \leq ||Q(Sx) - Sx||^2 + ||S^TQ(Sx)||^2 - ||Q(Sx)||^2 + ||x||^2 - ||Sx||^2 \leq ||Q(Sx) - Sx||^2 + ||S^TQ(Sx)||^2 + ||x||^2 \leq ||Q(Sx) - Sx||^2 + {\rm R}^2(\sigma^2_{max}(S) + 1)$.
We know that $\sigma^2_{max}(S) \leq \frac{d}{m}$ with high probability for Gaussian $S$. The error only depends on the aspect ratio $d/m$, and not $d$ directly. Although we provide the above justification as a response to the question, in LPLR we do not explicitly compute the $S^TQ(AS)$ anywhere. The effect of sketching $AS$ in the first low-rank factor are nullified by the second low-rank factor.
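This scaling is easy to check empirically. In the sketch below (our own, with the quantizer $Q$ dropped so as to isolate the sketching contribution), the average squared error tracks the aspect ratio $d/m$ rather than $d$ itself:

```python
import numpy as np

rng = np.random.default_rng(4)

def avg_sq_err(d, m, trials=50):
    """Average ||S^T S x - x||^2 over random unit vectors x."""
    errs = []
    for _ in range(trials):
        x = rng.standard_normal(d)
        x /= np.linalg.norm(x)
        S = rng.standard_normal((m, d)) / np.sqrt(m)  # so that E[S^T S] = I
        errs.append(np.linalg.norm(S.T @ (S @ x) - x) ** 2)
    return float(np.mean(errs))

e1 = avg_sq_err(d=200, m=50)   # aspect ratio d/m = 4
e2 = avg_sq_err(d=800, m=200)  # d/m = 4 again, with 4x larger d
e3 = avg_sq_err(d=800, m=50)   # d/m = 16
```

Quadrupling $d$ at fixed aspect ratio leaves the error essentially unchanged, while quadrupling the aspect ratio at fixed $d$ increases it accordingly.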
> Solver for $W$
Yes, the solution of this problem is available in closed form as $W^* = Q(AS)^{\dagger}A$, used in the analysis and also in the numerical simulations. We prefer to keep the general form in line 4 of Alg. 1 because one can use an approximation of $W^*$, obtained using conjugate gradient descent, instead of the closed-form expression.
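A minimal NumPy check of this equivalence (the rounding used to mimic $Q(AS)$ is a crude stand-in of our own):

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, m = 120, 80, 12
A = rng.standard_normal((n, 6)) @ rng.standard_normal((6, d))  # low-rank input
# crude stand-in for Q(AS): round the sketched matrix to a 0.25 grid
L = np.round(4.0 * (A @ rng.standard_normal((d, m)))) / 4.0

W_closed = np.linalg.pinv(L) @ A                # W* = Q(AS)^dagger A
W_lstsq = np.linalg.lstsq(L, A, rcond=None)[0]  # same minimizer via a LS solver
```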
> Missing references
Thank you for pointing out this nice work by Vargaftik et al. (2022). We have discussed the literature related to this in Sec. 1.1 (lines 76 to 84), which includes the work DRIVE [67], by the same set of authors. We will add the new reference to the list as well.
---
Rebuttal Comment 1.1:
Title: Thanks for the reply
Comment: Thanks for the reply. I still have some follow-up questions:
1. I don't get why LPLR needs less computations. $Q(AS)$ has the same size as $L$, and $Q(S^Tx)$ has the same size as $Rx$, right?
2. Thanks for clarifying. Please add this analytic comparison to the paper and update the theorem statement.
3. Can we extend the method in the mentioned SIAM paper to LPLR to handle the streaming data setting? If so, I suggest adding some discussion and describing the general ideas. If not, it should be mentioned in the limitation section that LPLR should be used for fix/static data.
4. Prop. D.1: so the reconstruction error $||S^TQ(Sx)-x||^2$ is actually not $O(1)$ but is $d/m$ which increases with $d$?
---
Reply to Comment 1.1.1:
Title: Clarifications to follow-up questions
Comment: Dear Reviewer,
Thank you once again for your thorough review, which really helped improve the clarity of our paper. Below, we have attended to your follow-up concerns:
> Why LPLR needs less computations.
We apologize for the confusion. Computing $Q(AS)Q(S^Tx)$ requires $nm$ multiplications (not $nmd$), which is the same as LPLR. You are correct -- we agree that the computation speedup of LPLR is not due to a reduced number of multiplications. Nevertheless, in Li \& Li, the sketching matrix $S$ needs to be stored in full-precision (FP) for computing $S^Tx$ for any incoming $x$. Contrary to this, LPLR requires computation of $Q(W)x$, where $Q(W)$ consists of $md$ quantized values, which can leverage modern advancements in hardware primitives for speeding up low-precision computations (e.g., half and mixed-precision compute). Thank you for bringing our attention to this.
> Please add this analytic comparison to the paper and update the theorem statement.
Thank you for acknowledging. We will make the necessary edits to the paper.
> Can we extend the method in the mentioned SIAM paper to LPLR to handle the streaming data setting? If so, I suggest adding some discussion and describing the general ideas.
An outline of how LPLR can be extended to the streaming setting is as follows:
Suppose we have a matrix $A_n \in \mathbb{R}^{n \times d}$ with $n$ datapoints, for which we store the sketching matrix $S \in \mathbb{R}^{d \times m}$, the left factor $(L_n)$, and the right factor $(R_n)$. For an incoming datapoint $a_{n+1}$, we can simply update the left factor as: $L_{n+1} \gets [L_n; Q(a_{n+1}S)]$, where $;$ denotes the concatenation of an additional row. The second low-rank factor, which is $R_{n+1} = \mathrm{argmin}_W\lVert L_{n+1}W - A_{n+1}\rVert_F$, can be computed from $R_n$ in a fashion similar to online least squares, which uses the Woodbury matrix inversion lemma.
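Concretely, the outline above can be sketched as follows (our simplification: we maintain the normal-equation sums $L^\top L$ and $L^\top A$ directly rather than the fully recursive Woodbury form, and we skip quantizing $R$ for brevity):

```python
import numpy as np

rng = np.random.default_rng(6)
d, m = 60, 10
S = rng.standard_normal((d, m)) / np.sqrt(m)   # fixed sketching matrix

def q(v, bits=8):
    # stand-in uniform quantizer for a single incoming row
    lo, hi = v.min(), v.max()
    step = (hi - lo) / (2 ** bits - 1)
    return np.round((v - lo) / step) * step + lo

G = np.zeros((m, m))   # running L^T L
B = np.zeros((m, d))   # running L^T A
rows_L, rows_A = [], []

basis = rng.standard_normal((5, d))            # the stream lies in a 5-dim subspace
for _ in range(200):
    a = rng.standard_normal(5) @ basis         # incoming datapoint a_{n+1}
    l = q(a @ S)                               # new quantized row of L
    rows_L.append(l)
    rows_A.append(a)
    G += np.outer(l, l)                        # rank-1 updates of the normal equations
    B += np.outer(l, a)

L, A = np.vstack(rows_L), np.vstack(rows_A)
R = np.linalg.solve(G, B)   # argmin_W ||L W - A||_F for all rows seen so far
rel_err = np.linalg.norm(A - L @ R) / np.linalg.norm(A)
```

Each update costs $O(m^2 + md)$, independent of the number of rows seen so far.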
Thank you once again for highlighting this setting. This is a very interesting observation and we will add a discussion regarding this to the main paper.
> Prop. D.1: so the reconstruction error $\lVert S^\top Q(Sx) - x \rVert^2$ is actually not $O(1)$ but is $d/m$ which increases with $d$?
Yes, the reconstruction error $\lVert S^\top Q(Sx) - x\rVert^2$ scales as $d/m$. But it does not necessarily increase with $d$ if we choose the sketch size $m$ to be proportional to $d$, i.e., $m = O(d)$; in that case, $d/m$ is $O(1)$. We are grateful for your careful scrutiny. We will add the above discussion to the paper, and mention that the reconstruction error $\lVert S^\top Q(Sx) - x \rVert^2$ does not grow with dimension as long as the sketch size $m$ is proportional to the dimension $d$. | Summary: The paper studies low-rank factorization of matrices in the low-precision setting and proposes a new algorithm that combines a randomized low-rank approximation method with quantization. The paper formally analyzes the guarantees of the proposed algorithm and also gives experiments on real-world datasets that demonstrate its advantages.
Strengths: 1. The presentation of the paper is good. The writing of the paper is clear and easy to follow.
2. To get the formal guarantee, the authors do a careful analysis, which I think is non-trivial.
Weaknesses: 1. Technical novelty: the main algorithm (Algorithm 1) seems to just be the standard approach from randomized numerical linear algebra plus quantization. Can the authors give more explanation of the technical novelty? (Though the analysis, I think, is non-standard.)
2. Experiment: I have some questions about the setting and details of the experiments section. See the next question.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Theorem 3.2 shows the guarantee of the proposed algorithms. However, does the sketch size $m$ also have a requirement given the accuracy parameter $\epsilon$ and $k$? Also, in the experiments it may be a factor that greatly affects the runtime and accuracy; is there somewhere indicating the choice of $m$?
2. I am a little confused about the naive quantization baseline. Can this approach make the matrix low-rank?
3. As mentioned in the paper, the sketching-based method is popular in randomized low-rank approximation. I think in the experiment the baselines should also include it (with the naive way of quantization).
4. The paper discusses the runtime complexity. Hence, I think it would be better to include the runtime in the experiment section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: See the above questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for the time you invested in reading our paper and writing the review. We appreciate that reviewing is a voluntary effort and below, we address each of your concerns to resolve your queries.
> Choice of $m$
Indeed, the sketch size $m$ is an important design parameter. We have explained our choice of $m$ for the experiments in lines $269$ -- $276$, where we mention that $m$ is selected so that the bit budgets are identical between naive quantization and LPLR, i.e., parity is ensured. In Table $3$, the choices of $m$ are the values listed in the {\it Rank} column. Note that the values of $m$ satisfy $(nm + dm)\cdot{\rm B} \leq nd\cdot {\rm B_{nq}}$, i.e., $m = \left\lfloor \frac{nd \cdot{\rm B_{nq}}}{(n+d)\cdot{\rm B}} \right\rfloor$, so that the bit requirement of LPLR does not exceed that of naive quantization. The same holds for the embedding classification experiments.
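For concreteness, the parity computation above can be sketched as follows (the matrix dimensions and bit budgets here are illustrative values, not the paper's exact configurations):

```python
# n x d matrix; LPLR uses B bits per factor entry, naive quantization
# uses B_nq bits per matrix entry (illustrative values).
n, d = 1024, 768
B, B_nq = 4, 1

# Largest sketch size m with (nm + dm)*B <= nd*B_nq, i.e. bit parity.
m = (n * d * B_nq) // ((n + d) * B)

assert (n * m + d * m) * B <= n * d * B_nq               # parity holds
assert (n * (m + 1) + d * (m + 1)) * B > n * d * B_nq    # m is maximal
print(m)  # -> 109
```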
Theoretically, if the matrix $\mathbf{A}$ has approximate rank $k$, i.e., $\lVert \mathbf{A}_k - \mathbf{A} \rVert_F$ is sufficiently small, it suffices to choose $m$ to be slightly larger than $k$, such as $m = k + p$, where $p \geq 2$ is the oversampling factor. The impact of the oversampling factor $p$ has been extensively studied in previous works, for example, see references $[16]$ and $[74]$.
> Can naive quantization make the matrix be low-rank?
Naive quantization *does not* make the matrix low-rank. We have included it as a baseline since the primary goal of the paper is to compress the matrix, and naive quantization, which does not exploit low-rank structure, is a standard practice in existing literature. In our comparisons, we show that there is a better way to compress the matrix, namely, LPLR, which exploits the low-rank structure and attains a smaller error than naive quantization, while maintaining parity w.r.t. the bit-requirement.
> Sketching + Naive quantization as baseline
Utilizing sketching along with naive quantization would sketch the columns of $\mathbf{A}$ as $\mathbf{AS}$, and find the right factor as $\mathrm{argmin}_{\mathbf{W}}\lVert \mathbf{ASW - A} \rVert_F^2 = \mathbf{(AS)^{\dagger}A}$. Subsequent naive quantization would give the factorization ${\rm Q}({\bf AS}){\rm Q'}(({\bf AS})^{\dagger}{\bf A})$. This is indeed an alternative benchmark, although it cannot achieve a lower error than LPLR because:
$\lVert{\rm Q} ({\bf AS}){\bf W}^* - {\bf A}\rVert_F^2 \leq \lVert {\rm Q}({\bf AS}){\rm Q'}(({\bf AS})^{\dagger}{\bf A}) - {\bf A}\rVert_F^2$, since ${\bf W}^* = \mathrm{argmin}_{\bf W}\lVert {\rm Q}({\bf AS}){\bf W} - {\bf A}\rVert_F^2 = {\rm Q}({\bf AS})^{\dagger}{\bf A}$. If ${\rm B'}$ is sufficient so that ${\rm Q}'({\rm Q}({\bf AS})^{\dagger}{\bf A})$ is closer to ${\rm Q}({\bf AS})^{\dagger}{\bf A}$ than ${\rm Q}'(({\bf AS})^{\dagger}{\bf A})$ is, then LPLR will have a smaller error.
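The inequality above can be checked numerically; here is a toy sketch in which a hypothetical uniform quantizer stands in for ${\rm Q}$ and ${\rm Q'}$, and the dimensions are illustrative. Since ${\bf W}^*$ is the least-squares minimizer for the left factor ${\rm Q}({\bf AS})$, its error cannot exceed that of any other right factor, quantized or not.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 60, 40, 10


def quantize(x, bits=4):
    """Toy uniform scalar quantizer standing in for Q and Q'."""
    scale = np.abs(x).max() + 1e-12
    levels = 2 ** (bits - 1) - 1
    return np.round(x / scale * levels) * scale / levels


A = rng.standard_normal((n, m)) @ rng.standard_normal((m, d))  # ~rank-m matrix
A += 0.05 * rng.standard_normal((n, d))                        # plus small noise
S = rng.standard_normal((d, m)) / np.sqrt(m)                   # sketching matrix

QAS = quantize(A @ S)
W_star = np.linalg.pinv(QAS) @ A    # LPLR right factor: argmin_W ||Q(AS)W - A||_F
W_alt = np.linalg.pinv(A @ S) @ A   # baseline right factor (AS)^† A

err_lplr = np.linalg.norm(QAS @ W_star - A)
err_base = np.linalg.norm(QAS @ quantize(W_alt) - A)
assert err_lplr <= err_base + 1e-8  # minimizer property of W*
```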
> Novelty
We propose an idea which is simple in its execution and pragmatic in its implementation. Our idea is well supported by modern linear algebra libraries, while offering theoretical analysis to effectively identify the regime in which it offers value over conventional, standard methods. Moreover, our method offers a tunable knob to achieve effective bit-rates in software that are not currently supported by hardware. For example, in Table $3$, using LPLR with appropriate parameters is equivalent to operating at $1$ or $2$ bits per pixel, despite the hardware working at a higher bit precision. Moreover, our method is adaptable to future hardware advancements, such as the emergence of $4$-bit GPUs, which could significantly speed up our technique's primitive operations. We believe that the simplicity of our algorithm adds to its appeal and makes it a valuable contribution to the literature, bridging the fields of matrix compression, quantization, and sketching while ensuring compatibility with upcoming hardware developments.
> Runtime experiments
Thank you for the valuable suggestion. As a response, we have included some additional experiments on the wall-clock runtime comparison of our method. They can be found in Table 2 and Fig. 3 of the global response PDF.
Table 2 now presents the wall-clock time required for compressing the embeddings of the CIFAR-10 and CIFAR-100 datasets. It is evident that LPLR significantly outperforms DSVD and LSVD in terms of speed. Fig. 3 complements the tabulated data by visualizing the runtime in two scenarios. The first scenario computes the matrix-vector multiplication $\mathbf{Ax}$ directly. The second scenario approximates this multiplication using a low-rank factorization, specifically $\mathbf{L(Rx)}$, where $\mathbf{L}$ and $\mathbf{R}$ are tall and wide matrices, respectively. This low-rank factorization strategy reduces the overall number of multiplications from $nd$ to $(n + d)m$.
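The multiplication-count claim can be verified with a small sketch (the sizes below are illustrative, and the matrix is constructed to be exactly rank-$m$ so the two products agree):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 2048, 2048, 128
L = rng.standard_normal((n, m))
R = rng.standard_normal((m, d))
A = L @ R                  # a matrix of exact rank m
x = rng.standard_normal(d)

direct = A @ x             # n*d scalar multiplications
factored = L @ (R @ x)     # (n + d)*m multiplications; parenthesization matters

assert np.allclose(direct, factored)
print(n * d, (n + d) * m)  # -> 4194304 524288, an 8x reduction
```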
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. I still have the follow-up question:
1. As mentioned by the authors, naive quantization does not make the matrix low-rank. However, if the bit budget is the same, why does LPLR have a better error than naive quantization in Table 3? In that case, would naive quantization achieve the optimal error in this setting?
---
Reply to Comment 1.1.1:
Title: Response to follow-up question
Comment: Thank you for your question. To clarify your concern, allow us to revisit the relationship of LPLR with its antecedents for compressing models, namely Direct SVD (which reduces the total number of parameters by leveraging low-rankness) and Naive quant (which reduces the number of bits used to represent each parameter). The efficacy of LPLR relies on finding a harmonious equilibrium between these two key factors.
In Table 3, we do not impose prior expectations on the performance of any particular algorithm. For instance, we do not possess any prior knowledge of the optimal target rank. However, given our analytical demonstration that there exists a regime (a combination of target rank and bit budget *per parameter*) where LPLR can surpass its baselines, we highlight this scenario (extreme model compression) in the *revised* Table 3, which corresponds to what is now Table 1 of the global response PDF. Naive Quant is indeed capable of outperforming LPLR/DSVD/LSVD (and does so) where significant model compression is not a requirement. This can be observed in rows $1$ to $4$ of the second sub-table in the global response PDF.
We hope this clarifies your query. In what follows, we further elaborate this.
Let's consider compressing $A \in \mathbb{R}^{n \times d}$. If we choose our target rank (essentially, the sketch size) to be $k$, leading to an approximation of $A \approx LR$, where $L \in \mathbb{R}^{n \times k}$ and $R \in \mathbb{R}^{k \times d}$, the total parameter count in $L$ and $R$ becomes $k(n + d)$, instead of the original $nd$ parameters in $A$. If we employ naive quant., which directly quantizes each element of $A$ without capitalizing on any low-rank structure, the total number of bits utilized amounts to $ndB_{nq}$, where $B_{nq}$ denotes the number of bits assigned for quantizing each entry. On the other hand, in the case of LPLR/DSVD/LSVD, where $k(n + d)$ parameters are quantized, the overall bit consumption equals $k(n + d)B$, where $B$ represents, once again, the bit allocation per entry (potentially distinct from $B_{nq}$).
In Table 3, a fair comparison is upheld by ensuring that each algorithm receives an equal allocation of resources—specifically, they utilize the same total number of bits. This similarity in resource allocation serves as the basis for evaluating their effectiveness in judiciously allocating the bits to maintain performance. In Table 3, the relevant columns, namely $B$ and $B_{\rm nq}$, correspond to LPLR/DSVD/LSVD and naive quantization, respectively. However, it is important to emphasize that we guarantee that the expression $$ \text{Total Bit Budget} = \text{Number of parameters} \times \text{Bit budget per parameter} = k(n+d)B = ndB_{nq}$$
remains consistent **across all algorithms**. For matrices that can be effectively approximated by low-rank methods, $\lVert A_k - A \rVert_F$ (where $A_k$ is the best rank-$k$ approximation) is small for some $k \ll {\rm min}(n,d)$. For such matrices, there's room to allow $B$ to exceed $B_{nq}$. This flexibility permits us to employ more bits per parameter while maintaining the same total bit count. The choice of hyperparameter $k$ plays a pivotal role in navigating the trade-off between the error stemming from low-rank approximation and the error resulting from quantization due to precision limitations. A smaller value of $k$ translates to a higher number of bits per parameter, leading to reduced quantization error. Conversely, a larger value of $k$ diminishes low-rank approximation error, but results in fewer bits per parameter, subsequently increasing the quantization error. **Matrices inherently characterized by low-rank traits allow for a substantially smaller $k$ value to be chosen, and within this context, LPLR exhibits superior performance compared to naive quantization.**
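A tiny numerical sketch of this rank-versus-precision trade-off, with illustrative sizes (not the paper's table values): under a fixed total budget $ndB_{nq}$, the bits available per parameter, $B$, shrink as the rank $k$ grows.

```python
# Fixed total budget nd*B_nq bits shared across the k(n+d) factor parameters.
n, d, B_nq = 1024, 768, 2
total_bits = n * d * B_nq

for k in (16, 64, 256):
    B = total_bits // (k * (n + d))  # bits per parameter at rank k
    print(k, B)                      # -> 16 54, 64 13, 256 3

# Smaller k: fewer parameters, more bits each (less quantization error).
# Larger k: better low-rank fit, but coarser quantization per entry.
```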
If any concern still remains, please don't hesitate to ask for further clarification. | Summary: The paper introduces a low-rank, quantized/low-precision matrix factorization which decomposes an n x d matrix A in the form A = LR, where L (of size n x m) and R (of size m x d) are low-rank factors. L and R are computed using a random projection matrix S in the form L = Q(AS) and R = Q'(W^*), where W^* is the matrix minimizing the squared Frobenius norm ||Q(AS)W - A||. Q and Q' are two independent quantizers with specified budgets.
The authors contrast their method with an SVD-based method for computing the quantized low-rank approximation, which instead sets L = Q(U_k S_k) and R = Q'(V_k), where U_k/V_k and S_k are the singular vectors/values, respectively. The paper has a theorem deriving a bound on the Frobenius norm of the factorization error. They apply the approximation to image data (for image compression) and embedding matrices (for an embedding classification task).
Strengths: + The paper is very well written and very clear in its presentation
+ Clear technical presentation incl. theorems, algorithms etc.
+ Novel idea, providing good review of relevant literature (different matrix sketching approaches, quantization etc.)
+ Thoughtful experiments to demonstrate real world application (using embedding compression)
Weaknesses: I don't see any weaknesses that need to be addressed at this moment.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: * In Table 3, why are the SVD and Naive quant. Frobenius norm errors the same for different rank choices? Is it because the matrices are of really low (<15) rank?
* I think Table 3 may be expanded to a set of figures to better convey the message (with potentially more bit budgets).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: This is an optimization paper and as such the limitations are not immediately obvious, though the eventual use of the model for downstream compression/computation speed up could be quite broad. The societal impact could also be positive given the matrix compression can lead to compute/energy savings. The paper doesn't have a dedicated discussion/section on any limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
We are grateful for the valuable time you spent in reading our paper and writing the review. The voluntary nature of the review process is truly valued. In what follows, we address your questions:
> Similar Frobenius norm errors for different rank choice
For naive quantization, the Frobenius norm is not influenced by the rank but is exclusively determined by the allocated bit budget, denoted as $B_{nq}$. As a result, it maintains a consistent value when the bit budget for naive quantization remains unchanged.
Regarding the low-rank methods, namely Direct SVD and LPLR, we identified (and fixed) a software bug in the parameter enumeration code, which led us to mistakenly assess all outcomes under a fixed sketch size equivalent to a rank of 200. This caused a lack of parity with respect to the bit requirement for naive quantization, and the comparison was unfair. Consequently, the Frobenius norm values appeared relatively steady; the slight variance in LPLR stemming from distinct samples of the sketching matrix. We sincerely apologize for this error and present revised outcomes in Table 1 of the global response PDF, now encompassing a significantly broader range of bit budgets.
In Table 1, we assess the performance of LPLR, Direct SVD, and LSVD across a uniform range of bit budgets (including many that are not hardware primitives). This approach enables a more comprehensive examination of the approximation error and its variability when employing different low-rank approximation techniques with varying bit budgets. One can clearly observe that LPLR and LSVD (LPLR-SVD) outperform naive quantization for $1$-bit quantization, except in the extreme instance of $\rm B = 32$. This trend persists even as bit budgets increase. For higher bit allocations, the superiority of LPLR (and LSVD) over NQ in terms of approximation error is apparent when the bit allocation $\rm B$ is sufficiently low, since this strategy permits a greater share of the storage budget to go toward capturing higher-rank components. This delicate balance lies at the core of LPLR's effectiveness: it excels at finding the compromise between the number of low-rank factors captured and the precision with which they are stored. We hope this clarifies the rationale behind the scenario in which LPLR outperforms standard techniques, namely naive quantization.
> Table 3 may be expanded to a set of figures to better convey the message
Thank you for suggesting this. We concur completely, but are forced to prioritize due to the limited number of pages available. We will certainly add a number of figures to the appendix to better convey the efficacy of our method. We have included additional results involving a wider range of bit budgets in the global response PDF (ref. Table 1), which provides a clearer picture of the method's benefits.
> Dedicated discussion/section on limitations.
We have discussed some limitations in Appendix K.
---
Rebuttal Comment 1.1:
Title: Thanks for your response.
Comment: Thank you for the detailed response. It addresses my concerns and questions and it's great that you caught and addressed the bug that caused the steady errors across parameters. | null | null |
Reconstructing the Mind's Eye: fMRI-to-Image with Contrastive Learning and Diffusion Priors | Accept (spotlight) | Summary: This paper presents a novel fMRI-to-image approach (MindEye) that can achieve excellent reconstructions of natural scenes. The main idea is to retrieve and reconstruct viewed images from fMRI brain activity. The model consists of two parallel submodules. The retrieval submodule uses a contrastive loss to retrieve images, generating features with high cosine similarity to the corresponding image embeddings. In the reconstruction submodule, the diffusion prior and CLIP image embeddings are used to generate aligned semantic features. To better reconstruct the image, a separate encoder is trained to generate low-level features, which are combined with the preceding semantic features to output the reconstructed images. The experimental results also demonstrate the superior performance of the method.
Strengths: Interesting idea. This paper provides a novel method to achieve fMRI-to-image task, and explores this task by introducing retrieval, diffusion prior and low-level information. Aside from the framework, the experimental results also fully illustrate its performance.
Weaknesses: 1. In this paper, most of the modules are existing models. The novelty of the paper requires further elaboration.
2. Lack of theoretical proof, most of them are descriptions. For example, the aligned embeddings mentioned in the paper, whether there is a theoretical proof of the rationality of the aligned embeddings for image reconstruction.
3. The network structure uses multiple large-scale models, such as CLIP and diffusion models, which are relatively time-consuming, while the paper lacks an explanation of the computational cost.
4. Limited datasets. Only NSD dataset is used, and the generalization of the model is not considered.
5. The experiments are valid, but only the performance of each submodule is considered; the performance after removing a certain submodule is not clear.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. How about the comparisons with other methods in terms of computational cost and model parameters?
2. For data augmentation method, is using generative methods to generate more samples an alternative?
3. Is the diffusion prior module pre-trained? Also, I wonder what is the benefit of using a diffusion prior?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Please refer to the weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: Most of the modules are existing models. The novelty of the paper requires further elaboration.
A1: MindEye relies on models trained on billions of image/text samples. NSD provides <30,000 fMRI samples per participant. We argue that part of the novelty of our approach is to use models like CLIP, trained on massive datasets, as a teacher to guide training of our brain models where we lack data. Note that our diffusion prior is trained from scratch; other novelties of our paper are summarized in our global response.
> Q2: Lack of theoretical proof ... for example, ... theoretical proof of the rationality of the aligned embeddings for image reconstruction.
A2: We now discuss existing proof for the rationality of aligned embeddings:
Multimodal contrastive learning will always produce disjointed embeddings because of the “modality gap” phenomenon whereby encoding modalities into a shared space restricts the effective embedding space to a narrow cone in geometric space. Liang et al. (2022) show theoretically and empirically that contrastive learning induces a modality gap phenomenon. This modality gap phenomenon explains why our CLIP image embeddings and CLIP fMRI embeddings are isolated to different regions of the same shared embedding space (UMAP plots in Appendix A.4). We use pre-trained models that expect CLIP image embeddings as input, thus motivating our training of a diffusion prior to align disjointed embeddings.
> Q3: How about the comparisons with other methods in terms of computational cost and model parameters?
A3: MindEye is modular and doesn’t need all the large scale models at all times. CLIP is used only during training. Similarly, Versatile Diffusion is only needed during inference. Even though our parameter count is high, MindEye can be trained on a single A100 in less than 18 hours. This is because our models primarily consist of MLPs which are faster to compute than transformers or CNNs.
At inference time, the diffusion prior can be dropped if only retrieval is needed. As stated in Appendix A.2.1, our diffusion prior is also faster than the DALLE-2 diffusion prior as it only needs 100 timesteps instead of 1000. It is also more computationally efficient because we modified the architecture to not have learnable queries and to instead directly predict denoised CLIP embeddings. For reconstruction, any off-the-shelf image generation method that accepts CLIP embeddings can be used, depending on computational cost requirements.
We have added a new table to Appendix A.10 comparing MindEye’s parameter counts with other methods.
> Q4: Limited datasets. Only NSD dataset is used, and the generalization of the model is not considered.
A4: We agree that generalization is important – our ultimate goal is to show results that are comparable to our NSD findings in other, independently-collected datasets. The main challenge here is that all other fMRI datasets of this kind are much smaller than NSD, by an order of magnitude, and the sheer size of NSD is a major contributor to the very high quality of the results shown here (see Appendix A.9, attached as a pdf to our global response, to see how performance scales as a function of training set size). To obtain comparable performance on smaller datasets, we will need to find a suitable way of combining data across subjects; this would allow us, e.g., to train on large datasets like NSD and then fine-tune and test on a smaller dataset incorporating different subjects. This “across-subject alignment” problem in fMRI data analysis is difficult because subjects differ in brain structure (which can lead to different input dimensionalities and potential misalignment across voxels) and in life experiences (which can lead to functional organization of visual concepts in their brains being different, even if their brains are structurally aligned). For the present work, we focused on optimizing single-subject decoding in the “high data” setting of NSD; going forward, we will be changing the model architecture to explicitly learn a shared-subject embedding space that supports across-subject decoding (we now discuss this in the Conclusions). Based on other alignment findings in the fMRI literature (e.g., “shared response modeling”, Chen et al., 2015), we think this approach has extensive promise. However, implementing the shared embedding space in MindEye requires substantially more work to complete and we do not have results yet for this exploration.
> Q5: The performance after removing a certain submodule is not clear.
A5: Performance with and without each of the 2 submodules is mentioned in Table 4. We also show the corresponding reconstructions for these ablations in Figure 8 in the Appendix. We also show retrieval and reconstruction performance when using just the backbone, backbone + projector, backbone + prior, and backbone + prior + projector.
> Q6: For data augmentation method, is using generative methods to generate more samples an alternative?
A6: Our model needs paired fMRI and visual stimulus data for which there are limited datasets. Establishing generative models for fMRI data is an important research problem in its own right, and at present there are not well-established methods for doing this. This is why we use mixup to create synthetic fMRI responses for MindEye.
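For reference, generic mixup augmentation for paired data can be sketched as below; this is a Zhang et al.-style sketch whose interface, pairing scheme, and `alpha` value are illustrative, not MindEye's exact implementation.

```python
import numpy as np


def mixup(fmri, target, alpha=0.4, rng=None):
    """Convex-combine random pairs of samples and their targets.

    fmri:   (batch, voxels) array of fMRI responses
    target: (batch, dim) array of matching targets, e.g. image embeddings
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)        # mixing coefficient in (0, 1)
    perm = rng.permutation(len(fmri))   # random partner for each sample
    mixed_fmri = lam * fmri + (1 - lam) * fmri[perm]
    mixed_target = lam * target + (1 - lam) * target[perm]
    return mixed_fmri, mixed_target, lam


rng = np.random.default_rng(0)
x, y = rng.standard_normal((16, 100)), rng.standard_normal((16, 32))
mx, my, lam = mixup(x, y, rng=rng)
```

Because both the synthetic fMRI response and its target are mixed with the same coefficient, the pairing between input and supervision signal is preserved.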
We considered augmenting image samples by randomly replacing the prior’s target with embeddings of generated image variations, instead of the ground truth image. We expect this could make the prior better at modeling the distribution of target images. But this augmentation would also slow down the training pipeline considerably.
> Q7: Is the diffusion prior module pre-trained? Also, I wonder what is the benefit of using a diffusion prior?
A7: The diffusion prior is not pre-trained. We show benefits of using a diffusion prior in Table 4. Figure 9 in the Appendix shows UMAP projections of CLIP image embeddings before and after the diffusion prior.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' rebuttal which clarifies some of my concerns. While the methodology can be further improved, the paper has its scientific merits. After reading the detailed responses, I tend to raise my score to weak accept. | Summary: The authors present MindEye, an innovative fMRI-to-image approach that utilizes contrastive learning and a diffusion prior for retrieval and reconstruction tasks. They conduct thorough comparisons, establishing MindEye's superiority over existing methods in terms of both qualitative and quantitative evaluations. The authors attribute the success of MindEye to its specialized submodules, improved training techniques, and models with increased parameterization. Additionally, they showcase MindEye's ability to preserve low-level image features in reconstructions through the use of img2img. Overall, MindEye represents a significant advancement in the field, pushing the boundaries of fMRI-based image retrieval and reconstruction.
Strengths: The authors conducted a comprehensive comparison of MindEye with existing methods, employing both qualitative side-by-side comparisons and quantitative evaluations. The results demonstrate that MindEye achieves state-of-the-art performance in both reconstruction and retrieval tasks. Notably, MindEye excels at retrieving the exact original image even when faced with highly similar candidates, indicating that its brain embeddings retain fine-grained, image-specific information. This remarkable capability enables accurate image retrieval even from extensive databases such as LAION-5B. The paper is well written and the presentation is to the point.
Weaknesses: The methodology employed in this study has some limitations in terms of originality, as a majority of the approach relies on external state-of-the-art models. The authors primarily train simple MLPs and utilize a pre-trained diffusion prior.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Exploring the applicability of the model to different patients would yield valuable insights. It raises the question of whether the model can provide meaningful results beyond the specific patient it was trained on.
2. A sensitivity analysis investigating the relationship between image output and fMRI input would be highly intriguing. It would shed light on the crucial components of the input that contribute to generating the CLIP embedding and ultimately influence the quality of the reconstructed image.
3. The authors suggest that increasing the number of parameters improves the results. It would be informative to know if they experimented with deeper networks to explore the potential benefits of a more complex architecture.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: 1. Lack of Methodological Originality: The study relies heavily on external state-of-the-art models, which diminishes the originality of the methodology. The authors predominantly employ simple MLPs and a pre-trained diffusion prior, which limits the novelty of their approach.
2. Applicability to Different Patients: Exploring the generalizability of the model to diverse patient populations would be valuable. It is essential to understand if the model can yield meaningful results beyond the specific patient it was initially trained on.
3. Sensitivity Analysis: Conducting a sensitivity analysis would provide valuable insights into the relationship between image output and fMRI input. Understanding which specific components of the input are crucial for generating the CLIP embedding and influencing the quality of the reconstructed image would enhance the understanding of the model's behavior.
4. Deeper Networks and Parameterization: The authors suggest that increasing the number of parameters improves the results. It would be beneficial to know if they explored the use of deeper networks, as a more complex architecture may have potential benefits. Investigating the effects of different network depths could shed light on the impact of model complexity on performance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and feedback!
> Q1: Lack of Methodological Originality: The study relies heavily on external state-of-the-art models, which diminishes the originality of the methodology. The authors predominantly employ simple MLPs and a pre-trained diffusion prior, which limits the novelty of their approach.
A1: We now clarify in the “Diffusion Prior” section that our diffusion prior is trained from scratch. Specifically, it is trained from scratch to align our disjointed CLIP fMRI embeddings, which are then fed to a pretrained Versatile Diffusion model to output image reconstructions.
MindEye relies on external state-of-the-art models that were trained on billions of image and text data samples. Critically, none of these existing models were trained with brain data and NSD provides less than 30,000 training samples per participant, which is orders of magnitude fewer data points than used for models like CLIP or Versatile Diffusion. We argue that part of the novelty of our approach is to leverage models like CLIP that were trained with massive datasets as a teacher to guide the training of our brain models where we have a relative scarcity of data. We have now clarified this nuance in the “Conclusions” section.
> Q2: Applicability to Different Patients: Exploring the generalizability of the model to diverse patient populations would be valuable. It is essential to understand if the model can yield meaningful results beyond the specific patient it was initially trained on.
A2: A limitation that we discuss in the paper is that all MindEye models are subject-specific, meaning that they cannot be generalized to other subjects. One reason for the subject-specificity of the model is that each person has a differently shaped / sized brain and thus different numbers of voxels, resulting in different input dimensionalities and potential misalignment across voxels. Another reason is that different people have had different life experiences, so the functional organization of visual concepts in their brains could be different even if the brains are structurally aligned. A future goal that our group is tackling is to change the model architecture to explicitly learn a shared-subject embedding space to potentially support across-subject decoding; based on prior “functional alignment” findings in the fMRI literature from Haxby, Ramadge, and others (e.g., “hyperalignment” and “shared response modeling” papers; Haxby et al., 2020, Chen et al., 2015) we think this approach holds great promise. However, implementing the shared embedding space in MindEye requires substantially more work to complete and we do not have results yet for this exploration.
> Q3: Sensitivity Analysis: Conducting a sensitivity analysis would provide valuable insights into the relationship between image output and fMRI input. Understanding which specific components of the input are crucial for generating the CLIP embedding and influencing the quality of the reconstructed image would enhance the understanding of the model's behavior.
A3: Model interpretability methods like GradCAM (Selvaraju et al., 2016) and Network Dissection (Bau et al., 2017) could be used to determine the fMRI voxels that most strongly respond to the presence of certain image features. Similar work has already been done using the Natural Scenes Dataset in Sarch et al (2023), where they identify the most significant image features corresponding to every voxel. Their models are the reverse of MindEye, where an input image is used to predict the fMRI response for specific brain regions. Another direction could be to visualize reconstructions from synthetic fMRI inputs where voxels outside a given brain region are set to zero and voxels inside the brain region are set to an arbitrarily high value and fed into the pretrained MindEye model. Resulting reconstructions would emphasize the image qualities important to that brain region (see Lin et al., 2022; Ozcelik et al., 2023). Such sensitivity analysis could help visualize the functional specialization of the brain and the most important voxels for reconstructions. We now discuss the above prior work and this research direction in the Conclusions section.
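The synthetic-input idea sketched in A3 above is simple to express concretely. The following is a minimal illustration only (the voxel count, ROI indices, and magnitude are hypothetical placeholders, not values from the paper), showing how one would build an input with all voxels zeroed except a chosen brain region:

```python
import numpy as np

def synthetic_roi_input(n_voxels, roi_indices, magnitude=5.0):
    """Zero out all voxels except a chosen region of interest (ROI),
    which is set to an arbitrarily high constant value."""
    x = np.zeros(n_voxels)
    x[roi_indices] = magnitude
    return x

# Hypothetical example: "activate" voxels 100-199 of a 15,000-voxel input.
# Feeding such vectors to a pretrained decoder would emphasize the image
# qualities that this brain region contributes to reconstructions.
x = synthetic_roi_input(15_000, np.arange(100, 200))
```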
> Q4: Deeper Networks and Parameterization: The authors suggest that increasing the number of parameters improves the results. It would be beneficial to know if they explored the use of deeper networks, as a more complex architecture may have potential benefits. Investigating the effects of different network depths could shed light on the impact of model complexity on performance.
A4: We observe diminishing returns from training larger models, because of the limited dataset size. The performance when going from 2 to 4 resblocks only improves when using skip connections, and even then the improvement is around 1% with a 33M increase in parameter count (Table 2). This suggests that a larger dataset would be needed for bigger models to demonstrate noticeable improvements.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal.
The authors' response is in line with what I anticipated. I hold the view that the stated limitations of this study, though few, still hold merit. As highlighted by the authors, these limitations could potentially be explored further in future research based on this foundation. | Summary: This paper introduces a method for fMRI-to-image conversion that facilitates realistic image reconstruction from brain recordings. The proposed method comprises two stages: semantic (high-level) and perceptual (low-level) pipelines. In the semantic stage, fMRI voxels are projected into the image space using a Multilayer Perceptron (MLP) and a diffusion prior. Notably, the paper employs the contrastive learning of the CLIP model to map fMRI data into a pre-trained CLIP model. Subsequently, the diffusion prior is used to map CLIP text embedding to image embedding. On the other hand, in the perceptual pipeline, image embeddings are mapped using an MLP projector and a CNN decoder. The method's experimental results are derived using the NSD dataset.
Strengths: The method presented in the paper is clear, and the validity of the claims is compelling.
I also appreciate the experimental evaluation conducted, which yields promising results both from a quantitative and qualitative standpoint.
The paper presents a commendable ablation analysis of various loss functions and data augmentation strategies.
Weaknesses: I think the main weakness of the method is the use of pre-trained models and evaluations with large datasets only. I was wondering what would happen if we trained a model without a pre-trained model.
Having employed contrastive learning in the form of CLIP, with other proposed modules, I think the overall novelty of the paper might not be very high. However, I think the main contribution of the paper is clear and robust.
The presented method is quite large, which makes it hard to build with restricted resources.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I wonder how the proposed model manages the domain shift when the structure of the training functional magnetic resonance imaging (fMRI) data significantly differs from that of the downstream task.
Regarding the architectural improvement ablation, what about the potential implications of training the overarching concept using a more compact architecture with fewer parameters? It would be interesting to explore whether a streamlined model could yield similar or improved results while offering more efficient computational processing and potentially mitigating the risk of overfitting.
What about the computational complexity of the proposed method, specifically concerning image generation? How does this model compare to other methods in terms of computational demands?
The computation complexity would be crucial for understanding its practical applicability, particularly in settings where computational resources may be limited.
Finally, I am curious about the choice of dataset. The paper utilizes NSD dataset for model training and evaluation. Could the model generalize well when applied to different datasets, specifically smaller ones? Moreover, when the dataset's size is reduced, can the model maintain its performance and possibly outperform comparative methods?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes, the paper discusses the possible limitation of the current work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: I think the main weakness of the method is the use of pre-trained models ... the overall novelty of the paper might not be very high.
A1: MindEye relies on external state-of-the-art models that were trained on billions of image and text data samples. Critically, none of these existing models were trained with brain data and NSD provides less than 30,000 training fMRI samples per participant, which is orders of magnitude fewer data points than used for models like CLIP or Versatile Diffusion. Part of the novelty of our approach is to leverage models like CLIP that were trained with massive datasets as a teacher to guide the training of our brain models where we have a relative scarcity of data.
> Q2: The presented method is quite large which makes it hard for building it with restricted resources. What about the computational complexity ...
A2: Even though the parameter count for MindEye is high, MindEye (including both high- and low-level pipelines) can be trained on a single A100 in less than 18 hours. This efficiency is due to the bulk of the parameters stemming from MLPs, which are faster to compute than transformers or CNNs. We now mention these details in the “Diffusion Prior” section.
Regarding image reconstruction, our generative model is pre-trained. Any off-the-shelf image generation method that accepts CLIP embeddings can be used with MindEye, meaning more efficient models can be swapped in depending on computational cost requirements.
Also note that at inference time, our diffusion prior (which is trained from scratch) can be dropped if only retrieval is needed, and as stated in Appendix A.2.1, our diffusion prior is faster than the DALLE-2 diffusion prior because we use 100 timesteps instead of 1000, and we modified its architecture to no longer have learnable queries and to instead directly predict denoised CLIP embeddings.
Regarding comparison of computational demands with other reconstruction papers, we have added a new table to Appendix A.10 (see pdf attached to global response) comparing MindEye’s parameter counts with other methods. Lin et al. (2022) use 2 CNN mapper networks of size 1.17M each and also finetune a Lafite based StyleGAN. Takagi et al. (2022) use a linear model with 450M params for their high level pipeline and a linear model with 37M params for low level. Ozcelik et al. (2023) use 257 individual linear models of 12M params each for their high level pipeline and a linear model with 1.45B params for low level. In contrast, MindEye uses an MLP + diffusion prior approach totaling 996M params for the high level pipeline and an MLP with 206M params for our low level pipeline.
> Q3: Could the model generalize well when applied to different datasets, specifically smaller ones? I wonder how the proposed model manages the domain shift ...
A3: As an initial exploration into the importance of dataset size to MindEye performance, we now report performance for Subject 1 using models trained on reduced subsets of the complete training data in Appendix A.9 (attached pdf to our global response).
These results show that even with half the training samples, MindEye still achieves state-of-the-art retrieval performance and competitive reconstruction performance. Even with just 500 training image samples (less than 6% of the full training data), results remained competitive with previous models (albeit no longer state-of-the-art), as shown via the “2-Sessions” model evaluations.
This suggests that MindEye is viable for datasets with much less data than NSD, although future work is needed to directly test performance of this approach to new datasets.
Regarding the topic of domain shift, a limitation that we discuss in the paper and which will be the focus of future work is that all MindEye models are subject-specific, meaning that we cannot train a model on NSD for Subject 1 and then evaluate that model on other subjects or non-NSD datasets. One reason for this is that each person has a differently shaped / sized brain and thus different numbers of voxels, resulting in different input dimensionalities and potential misalignment across voxels. Another reason is that different people have had different life experiences, so the functional organization of visual concepts in their brains could be different even if the brains are structurally aligned. Training on NSD and evaluating on non-NSD datasets would require that the same subject was likewise scanned in the separate fMRI dataset.
> Q4: What about potential implications of training the overarching concept using a more compact architecture with fewer parameters?
A4: Regarding the concern of overfitting with large models, note that our current MindEye model does not overfit despite its large size thanks to our training techniques. That said, we agree it is worthwhile to explore how more compact architectures could be used to achieve similar or improved model performance with reduced parameter count.
In our current approach, we predict all CLIP embeddings (size 257x768) from a hidden representation of size 4096, using a linear layer. This layer alone accounts for 808M parameters out of the 940M parameters in our backbone. Our initial attempts to reduce the size of this layer performed worse than the simple linear layer.
In future work, we will explore other methods to reduce the size of this layer by compressing the information in the 257 CLIP-image tokens into a smaller number of tokens. Another possible direction could be to reduce the size of the CLIP embedding dimension using PCA (see Ramesh et al., 2020). We now mention this future direction in the Conclusions section of the paper. Note that we cannot simply use the CLS token (size 1x768) as it does not contain all the necessary information (especially low-level image information) about the image. This is demonstrated by the inferior retrieval scores from the CLS token variant of MindEye as shown in Table 2.
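As a quick back-of-the-envelope check on the ~808M parameter count mentioned in A4 above, the size of a dense layer mapping a 4096-dimensional hidden state to a full 257x768 CLIP embedding can be computed directly (this is only a sanity-check calculation, ignoring any implementation-specific details):

```python
hidden_dim = 4096
clip_tokens, clip_dim = 257, 768

weight_params = hidden_dim * clip_tokens * clip_dim  # 808,452,096
bias_params = clip_tokens * clip_dim                 # 197,376
total = weight_params + bias_params

print(f"{total / 1e6:.1f}M parameters")  # ~808.6M, matching the ~808M figure
```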
---
Rebuttal 2:
Comment: While I appreciate the author's thoughtful rebuttal, I maintain my original score for the following reasons:
- "Each person has a differently shaped/sized brain and thus different numbers of voxels." There are several standardization methods available to unify the number of voxels. For further evaluation of this method, one possible approach could be to use a surface map, which has been employed in some studies to standardize differently shaped brains.
- "Part of the novelty of our approach is to leverage models like CLIP that were trained with massive datasets as a teacher to guide the training of our brain models." While it's true that using a model trained on a massive dataset adds an element of novelty, I believe it also makes the innovation incremental and limits the model's application to scenarios where a pre-trained model is available. Conducting between-subject experiments could help us understand whether the model is capable of generalizing well across different domains.
For these reasons, I stand by my initial assessment and disagree with some of the points raised in the responses. | Summary: The paper improves the fMRI reconstruction method using contrastive learning strategy and diffusion prior model. The concept is relative simple but the details of the proposed method, which is the key to make difference, is well-implemented. First, the BiMixCo implements a contrastive loss between the fMRI voxel representation and the CLIP image representation with the utilization of mixup augmentation of the fMRI vectors. Second, the Diffusion prior is implemented on the fMRI representation after the MLP backbone (which is the input to the MLP projector of which output is the input the contrastive loss) for the generating the representation for reconstructing the image. Moreover, the paper implements additional low-level pipeline based on VAE and Stable Diffusion so that an initialization of reconstruction can be attained. The overall reconstruction and retrieval performance significantly outperforms the baselines with large margin, particularly for the retrieval performance.
Strengths: - While the problem is not new and the proposed methods are clever combinations of existing techniques rather than brand new ones, the overall fMRI-based reconstruction and retrieval performances are impressive.
- Both retrieval and reconstruction results are presented with thorough ablation studies to justify the modeling choice.
- The visualization results are compelling and convincing.
Weaknesses: - The performance gaps between MindEye and other baselines are vast for retrieval, while those for reconstruction are not as large. It seems like the reconstruction results for the baselines are copied from the original papers, while the retrieval results are reproduced. Is that correct? What is the main reason for such a huge difference? The representations used for reconstruction and retrieval should not be that different, so such a wide gap is a little mysterious. In case the retrieval results for the baselines are reproduced, why not also reproduce the results for the reconstructions?
- The effect of BiMixCo is not very convincing since as shown in Table 4, it seems to be helpful only for the retrieval task while not as much for the reconstruction task. Would there be some additional justification?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please see my comments in Weakness.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: - The exact process for the evaluation for the retrieval task is not very clearly described.
- The performance gap between the reconstruction and retrieval is not clearly explained.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: The performance gap between the reconstruction and retrieval is not clearly explained. The performance gaps between MindEye and other baselines are vast for retrieval, while those for reconstruction are not as large. It seems like the reconstruction results for the baselines are copied from the original papers, while the retrieval results are reproduced. Is that correct? What is the main reason for such a huge difference? The representations used for reconstruction and retrieval should not be that different, so such a wide gap is a little mysterious. In case the retrieval results for the baselines are reproduced, why not also reproduce the results for the reconstructions?
A1: The related papers all shared the goal of reconstructing images from brain activity, not retrieving images. The one exception is Lin et al. (2022), who did report retrieval performance, as shown in our Table 1 (i.e., copied from their paper, not reproduced). We had to reproduce the retrieval results for Ozcelik et al. because the original paper reported reconstruction performance but not retrieval performance (this is mentioned in Appendix A.5).
Therefore, one explanation for the discrepancy between our huge improvement to retrieval performance but only moderate improvement to reconstruction performance compared to previous work is that the authors of other papers were never aiming to improve retrieval performance. That is, other work never tried to decouple retrieval and reconstruction objectives.
To get such high retrieval performance we engineered separate retrieval and reconstruction submodules. This is necessary because the objective for retrieval performance is distinct from the objective for reconstruction: maximizing cosine similarity between paired samples does not translate to minimizing the mean squared error of image latents. We discuss our evidence that these objectives trade off with each other in lines 129-130 and 239-240 of the paper (the diffusion prior’s role cannot be fulfilled by simply adding MSE loss to the MLP projector, and using both contrastive and MSE losses on the MLP backbone does not work well). In other words, representations used for reconstruction and retrieval actually are expected to be quite different.
> Q2: The effect of BiMixCo is not very convincing since as shown in Table 4, it seems to be helpful only for the retrieval task while not as much for the reconstruction task. Would there be some additional justification?
A2: We expect that the drop in reconstruction performance due to BiMixCo stems from how mixup generates synthetic datapoints through linear interpolation. For the retrieval task, we do not need to mix the targets, as the contrastive objective optimizes the relative distance of the model predictions with positive and negative samples. However, for the reconstruction task we need absolute targets for the mixed inputs. We generated new targets by mixing the original targets in the same ratio as the mixup inputs. This causes a slight shift in the distributions of target embeddings at train time, leading to a drop in test-time performance. We now clarify this in the second-to-last paragraph of the “Contrastive Learning” section. Relatedly, other recent works (Liu & Wang, 2023; Yu, Wang, & Wu, 2021) have also shown that stopping mixup after a certain number of epochs improves performance by reducing the train-test disparity (this is mentioned in L107 in the paper).
Also note that BiMixCo as a loss is only applied to the retrieval submodule. It doesn’t have a direct effect on the diffusion prior, except through the common MLP backbone and the slightly altered target distribution. We observe that BiMixCo gives the highest retrieval performance but slightly hurts reconstructions (Table 4). Our final schedule combining BiMixCo + SoftCLIP losses strikes the best balance between retrieval and reconstruction performance in a single model.
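The target-mixing scheme described in A2 above can be sketched in a few lines. This is a simplified illustration only: the function name, variable names, and beta-distribution parameter are assumptions, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup_pair(x_i, x_j, y_i, y_j, alpha=0.4):
    """Mix two (input, target) pairs with the same ratio lam, so the
    reconstruction branch receives absolute targets for mixed inputs."""
    lam = rng.beta(alpha, alpha)
    x_mix = lam * x_i + (1 - lam) * x_j
    y_mix = lam * y_i + (1 - lam) * y_j  # targets mixed in the same ratio
    return x_mix, y_mix, lam
```

Because the mixed targets are linear interpolations of real targets, their distribution differs slightly from the true target distribution seen at test time, which is consistent with the train-time/test-time disparity discussed in the rebuttal.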
> Q3: The exact process for the evaluation for the retrieval task is not very clearly described.
A3: We have now updated the text to more clearly describe the retrieval evaluation process:
“We followed the same procedure as Lin et al. [11] for calculating the retrieval metrics reported in Table 1. Brain retrieval performance was calculated according to the following procedure: for each test image, the image is converted to a CLIP image embedding and we compute the cosine similarity to both its respective ground truth disjointed CLIP fMRI embedding as well as 299 other randomly selected disjointed CLIP fMRI embeddings in the test set. For each test sample, success is determined if the cosine similarity is greatest between the ground truth CLIP embedding and its respective fMRI embedding (aka top-1 retrieval performance, chance=1/300). We average retrieval performance across all test samples and repeat the entire process 30 times to account for the variability in random sampling of batches. For image retrieval, the same procedure is used except image and brain samples are flipped such that the goal is to find the corresponding paired CLIP image embedding out of 300 possible CLIP embeddings in the batch. Lin et al. [11] refer to image retrieval as “forward retrieval” and brain retrieval as “backward retrieval” in their paper.”
For context, the above paragraph will replace the current explanation on page 5 that begins with "To compare our retrieval performance to other papers we average ..."
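The retrieval procedure quoted above reduces to a top-1 nearest-neighbor check under cosine similarity. A minimal sketch (batch size and embedding dimension below are illustrative only, not the paper's exact code):

```python
import numpy as np

def top1_retrieval_accuracy(query_emb, candidate_emb):
    """query_emb, candidate_emb: (N, D) row-paired embeddings.
    A query succeeds if its paired candidate has the highest cosine
    similarity among all N candidates (chance = 1/N)."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    c = candidate_emb / np.linalg.norm(candidate_emb, axis=1, keepdims=True)
    sims = q @ c.T                       # (N, N) matrix of cosine similarities
    hits = sims.argmax(axis=1) == np.arange(len(q))
    return float(hits.mean())

# Swapping the roles of query_emb and candidate_emb gives the other
# retrieval direction ("image retrieval" vs. "brain retrieval").
```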
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttal and I will keep my original rating "Accept". | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their thorough comments, thoughts, and suggestions on our manuscript. We have done our best to answer all questions and concerns. We summarize some of the core revisions and clarifications below.
* New Appendix A.9 (see attached pdf to this response) table shows how MindEye performs given reduced subsets of the complete training data. Competitive performance was still observed even when training with less than 6% of the full training data. We also discuss how future work should explore generalization to other subjects and datasets and how this is not straightforward to immediately implement due to the nature of working with brain samples, which are intrinsically unique to the individual.
* Clarification on the novelties of our paper, especially in regards to the concern that there was a lack of novelty due to the use of large-scale pre-trained models. We note that Natural Scenes Dataset contains less than 30,000 training samples compared to the billions of samples used to train CLIP and Versatile Diffusion, and that figuring out the best way to leverage these larger models as teachers to guide the training of our brain models with relative scarcity of data is one of the novelties of this paper.
* Clarification that our diffusion prior is actually trained from scratch, and is much faster during inference than the DALLE-2 diffusion prior. This is because we only use 100 timesteps instead of 1000 timesteps, and we also implemented architectural changes (Appendix A.2.1), including no learnable queries and direct prediction of denoised CLIP embeddings, which allow the diffusion prior to be performed on a single A100. We also clarify that MindEye can work with any off-the-shelf image generation method that accepts CLIP embeddings depending on computational cost requirements.
* Description of the computational resources used for training MindEye and comparison of model size with other reconstruction approaches (see new Appendix A.10 in attached pdf). Notably, we mention how MindEye, despite having a very large parameter count, is actually quite computationally efficient due to the majority of parameters stemming from linear layers (MLPs are more computationally efficient than transformers or CNNs). MindEye can be fully trained on a single A100 in less than 18 hours.
* We elaborate on the theoretical motivation for aligning disjointed CLIP embeddings by referring to the “modality gap” geometric phenomenon. This describes the underlying reason behind why contrastive learning produces disjointed embeddings and thus motivates our use of a diffusion prior for alignment.
Because a few reviewers brought up the concern of novelty, below we summarize the novel advances put forth by our paper:
1. Our novel implementation of separate submodules within a single model was shown to be critical for attaining simultaneous state-of-the-art reconstruction and retrieval metrics.
2. Contrary to common expectations within the neuroimaging community, using a deep MLP with a parameter count orders of magnitude higher than the number of training samples did not produce overfitting and instead benefitted model performance.
3. We introduce a novel bidirectional version of mixup contrastive data augmentation that seems to work very well in our low sample setting.
4. Mapping voxels to Stable Diffusion’s VAE latent space produces state-of-the-art image reconstructions in terms of low-level image metrics.
5. We are the first paper to attempt large-scale image retrieval (using LAION-5B) from fMRI brain data inputs.
Pdf: /pdf/b316e98234e321a0569d2cda7962316da9156013.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Break It Down: Evidence for Structural Compositionality in Neural Networks | Accept (spotlight) | Summary: This paper studies the structural compositionality problem from a novel perspective. It first defines structural compositionality as the extent to which neural networks decompose a complex task into a series of subroutines and implement them modularly. Then, the paper designs several clever experiments to show that many models indeed implement subroutines in their subnetworks, which demystifies how the compositional generalization ability might arise in deep neural networks. Although the paper could be further improved in the following directions (see the limitations part), I think the current version already meets the criteria of NeurIPS and can inspire the community a lot. So I would give an accepting score to the current version. In summary, the paper is easy to follow and quite novel to me. I enjoyed reading it a lot.
Strengths: 1. Rather than the downstream generalization ability or the data attributes, the paper focuses on the composition of different rules (functions, or subroutines), which is quite novel and persuasive.
2. The experimental designs are quite ingenious and persuasive to me, it relates the subroutines to subnetworks using the mask function on each parameter.
3. The paper investigates different model architectures (ResNet, ViT, BERT) and different input signals (image and language).
Weaknesses: See the limitation part.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. For the non-compositional solution panel in Figure 1, it might be better to draw class A and class B in the upper right and lower left corners respectively. Then, adding a decision boundary (i.e., the diagonal) would be helpful.
2. It is a little hard for me to understand what the bottom-right panel of Figure 1 is talking about before reading the experimental part.
3. It seems that the fill colors of the four panels in the bottom-right part of Figure 2 are wrong. IIUC, the colors for the ablation model’s performance would be the inverse of those for the subnetwork.
4. In section 7, the authors mentioned that ViT fails on the proposed problem, is there any possible explanation?
5. For Figure 4, adding the legends for *** and gray dots would be helpful.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: I believe solving the following two concerns could make the paper stronger and bring more insights to the community (it is also fine to consider them in future works).
1. The paper selects the subnetwork by training a mask function $m$ and generates two subnetworks, i.e., $Sub_i$ and $M_{ablate}$, based on it. Then it might be interesting to draw the mask for different subnets (keeping the structure of the model) to see if there exist any interesting patterns. For example, it is possible that these two complementary networks are using different halves of the neurons across all layers; or they might share some lower layers but split apart in the higher layers. I guess these results might be quite helpful for us to understand HOW different subroutines are stored in different model architectures. (Maybe whether the model is pretrained also influences this split.)
2. Besides the related works mentioned in the paper, there is also another line of works discussing how compositional mapping emerges in human language and neural network representations. It is named iterated learning, which is first proposed in the field of language evolution to explain why modern human language is so structural [1] and then extended to the deep learning conditions, e.g., emergent communication in [2] and visual question-answering in [3]. It might be interesting to consider the relationships between these works.
[1] Kirby, Simon, et al. "Cumulative cultural evolution in the laboratory: An experimental approach to the origins of structure in human language." PNAS 2008
[2] Ren, Yi, et al. "Compositional languages emerge in a neural iterated learning model." ICLR 2020
[3] Vani, Ankit, et al. "Iterated learning for emergent systematicity in VQA." ICLR 2021
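As a concrete aside on limitation 1: the split into $Sub_i$ and $M_{ablate}$ via a trained mask $m$, and the per-layer mask statistics I suggest drawing, could be sketched as below. This is a hypothetical numpy illustration (the sigmoid-gate relaxation and all function names are my own assumptions, not the paper's exact procedure):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def split_subnetworks(weights, mask_logits, threshold=0.5):
    """Binarize a learned soft mask m = sigmoid(logits) into two
    complementary subnetworks: Sub_i keeps the masked-in weights,
    while M_ablate keeps everything the mask discards."""
    m = (sigmoid(mask_logits) > threshold).astype(weights.dtype)
    return weights * m, weights * (1.0 - m)  # (Sub_i, M_ablate)

def mask_density_per_layer(per_layer_logits, threshold=0.5):
    """Fraction of weights each layer's mask keeps -- plotting this per
    layer would show whether two subnetworks share lower layers and
    split apart only in higher layers."""
    return [float((sigmoid(l) > threshold).mean()) for l in per_layer_logits]
```

Plotting `mask_density_per_layer` for two complementary subnetworks (and for pretrained vs. randomly initialized models) would directly test the patterns speculated about above.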
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your excellent feedback and questions about our work! We will edit the figures for clarity in the final version of the paper. We agree that a figure describing how subnetworks are distributed throughout a model would be a valuable addition to the paper, and are working on creating such a diagram for the final version. Figure 18 in the supplementary material contains this information for one model, but we agree that a better version of this figure should appear in the main text. Also, thank you for the references to the Iterated Learning literature - they are very cool and relevant! We will add a section to our Related Work describing how iterated learning can be used to encourage compositional behavior in neural networks.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: Thank you for your response and for accepting my suggestion. I am looking forward to seeing the final version. This is really a cool paper. | Summary: The paper investigates the concept of structural compositionality in neural networks - subnetworks that implement specific subroutines, such as computing the syntactic number of specific words in a sentence.
Strengths: 1. Conceptual modularity in neural networks is an idea that has been studied and structurally implemented across various domains: explainability, reasoning, efficiency. The authors advance the study of modularity in neural networks by proposing the idea of structural compositionality, which is both very well thought out and well explained. The experimental plan is also carefully controlled to ensure causal observations regarding the existence of structural compositionality in networks across both vision and language tasks. Overall this is a very well done study.
Weaknesses: No major weaknesses in my opinion
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: **Important Related Work:** While the authors mention the field of mechanistic interpretability, they fail to mention some key papers in this field that explore similar ideas. Chughtai et al. [1] reverse engineer small neural networks to show that they learn to implement group compositions for any finite group using a particular novel algorithm predicted from theory. Dziri et al. [2] show that transformers solve compositional tasks by approximating part of the full computational graph via linear sub-graph matching, and provide theoretical arguments for how increasing compositional task complexity leads to degradation in performance. I think these papers should be referenced so that readers will have a more complete picture of compositionality in neural networks.
### References
1. Chughtai, Bilal, Lawrence Chan, and Neel Nanda. "A Toy Model of Universality: Reverse Engineering how Networks Learn Group Operations." (2023).
2. Dziri, Nouha, et al. "Faith and Fate: Limits of Transformers on Compositionality." arXiv preprint arXiv:2305.18654 (2023).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors do a good job at highlighting the limitations (e.g. specifying subroutines in advance) of their work. I do not particularly see potential negative social impacts of this work as it mostly pertains to explainability of models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments on our work! The related works that you’ve referenced are extremely interesting, and will strengthen our discussion - thanks for pointing them out! We will add a new section to our discussion that describes the relationship between structural compositionality and the forms of compositionality that are discussed in these papers.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for taking the time to write a rebuttal, and accepting my suggestion to include the additional references and discuss them wrt your contributions. I stand by my original rating, and look forward to (hopefully) seeing more work on mechanistic interpretability and structural compositionality based on this paper. | Summary: This research paper explores the concept of compositionality in neural networks, a contentious topic in the field of AI. Compositionality, which is a defining feature of human cognition, allows for abstract and flexible processing of language and visuals. The debate lies in whether neural networks need explicit symbolic systems to implement compositional solutions, or if these solutions can be implicitly learned during training. This research introduces "structural compositionality", which gauges the degree to which neural networks can break down compositional tasks into subroutines and implement them in a modular fashion. Surprisingly, the study found evidence that many models do implement subroutines in modular subnetworks. Additionally, it was found that unsupervised pretraining leads to a more consistently compositional structure in language models. This research contributes to mechanistic interpretability, helping explain the algorithms that neural networks implement in their weights, using techniques from model pruning.
Strengths: The paper is quite interesting, timely and I think that most people would find the results surprising. The area of mechanistic interpretability is very important these days. One of the strongest points of this paper is the elegant approach it is taking, in terms of experimental design, to construct compositional tasks and to discover whether the presence of such "functional compositionality" also results in specific subnetworks for each "concept". The paper presents results both for images and for language experiments. The results are clear and the presentation of the paper is very good.
Weaknesses: Of course, one general weakness is that the main result of the paper is established purely through empirical results. The reader may not be convinced whether the claims of the paper are actually true in general -- or whether they are true only in some architectures and tasks.
Another major weakness, which the authors also identify as a limitation of their study, is that one must know which subroutines to look for.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: A question for the authors to think about: what is the role of pruning (or network density, more generally) in the emergence of structural compositionality? Would you expect that emergence to be equally likely in a dense network and in a sparse network that has resulted through pruning?
Another question to think about: what is the role of the network's depth in the emergence of structural compositionality? Functional compositionality can have its own depth. How does that functional depth relate to the depth of the network itself?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The paper is honest about its limitations. The most important of them is that one has to know in advance which "subroutines" (I do not like this term actually -- maybe "concepts" or "subfunctions" would be better terms) to look for. This limitation however may be addressed by other work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your excellent feedback and questions about our work! We agree that the current work is a purely empirical study, and so we attempted to demonstrate the effect on various architectures and domains to help convince the reader that structural compositionality is a property of a wide class of models. That said, since submitting (and even more so after your comments), we have been investigating possible theoretical work that might relate to our findings. Particularly interesting is this recent paper on compositionality and sparse representations in neural networks (Poggio 2023, https://cbmm.mit.edu/sites/default/files/publications/Theoretical_Framework__How_Deep_Nets_May_Work_24.pdf). We will add a new section to our discussion to outline the ways in which our results are consistent with such theoretical work, and point towards possible future experiments that could better connect the two.
Your question regarding how sparsity interacts with structural compositionality is very interesting! We will run an experiment during the discussion period to investigate this, and leave a comment on OpenReview with the results next week. We will also add these results to the Appendix.
---
Rebuttal Comment 1.1:
Comment: thanks for the thoughtful rebuttal. After considering the other reviews too, I have decided to stay with my original score.
---
Reply to Comment 1.1.1:
Comment: Thanks! Just to follow up regarding the sparsity experiment: we pruned each Resnet50 used in our Number-Contact vision experiments, and then reran the experiment in an identical manner on the pruned models. We do not observe any salient differences between these results and our original results. In particular, the network seems to exhibit structural compositionality for Contact, but not for Number, just like in the original experiment. We will be sure to add this experiment to the appendix! | Summary: The authors investigated to what extent present-day standard neural networks, trained on tasks solvable by composing subroutines, develop modular structures reflecting the tasks' compositional nature (called structural compositionality in this study). To answer this question, they took the following approach on the trained networks: 1) use a model pruning technique to find the best subnetworks for individual subroutines, then 2) check the patterns in the accuracy difference of the discovered subnetworks and the ablated networks (complementary subnetworks with respect to the first ones) on the tasks corresponding to individual subroutines. They conducted experiments on both vision and language tasks with multiple neural networks and pretraining regimes. Based on the results, they concluded that neural networks oftentimes exhibit structural compositionality even when there are no explicit inductive biases toward such solutions.
Strengths: Making a learning machine that has compositional (systematic) generalization capability like a human is an important goal yet to be achieved in the field of artificial intelligence. On one hand, standard neural networks without explicit inductive biases for compositionality can generally be said to fail at compositional generalization, as stated in lines 56 - 57; on the other hand, these networks do show compositional generalization capability to some extent in some cases, although imperfectly and inferior to networks that are given such inductive biases. Revealing what is going on inside a neural network is an important research topic in this context, and the approach and results of this study will be of interest to the NeurIPS audience.
The idea of the approach taken in this study is clear and reasonable. The model pruning method, experimental logic and concrete experiments are explained fairly well (some parts are given in the Appendix). The experiments are fairly rich in terms of tasks (both vision and language tasks were conducted and there is variety in each) and models (ResNet50, WideResNet50, and ViT were tried for vision tasks, and BERT-small was tried for language tasks).
Weaknesses: 1. The URL of an Anonymous GitHub repository is provided in the paper (footnote on page 2, https://anonymous.4open.science/r/Compositional_Subnetworks-C8AB/). However, it is very hard to access the contents of the repo, because, although the repo consists of many subdirectories, there is no README in the top-level directory giving an overview. This cast a shadow on the reproducibility.
1. Another relatively weak point of this paper is the contextualization relative to Csordás et al. 2021. In lines 49 and 259, Csordás et al. 2021 is explained as a work merely on a multitask setting, but it studied compositional (systematic) generalization settings using the SCAN (Lake and Baroni 2018) and the Mathematics Dataset (Saxton et al. 2019).
Csordás, R., van Steenkiste, S., and Schmidhuber, J. Are neural nets modular? Inspecting functional modularity through differentiable weight masks. In Proc. of the 9th International Conference on Learning Representations (ICLR), 2021.
Lake, B. M. and Baroni, M. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In Proc. of the 35th International Conference on Machine Learning (ICML), pp. 2873--2882, 2018.
Saxton, D., Grefenstette, E., Hill, F., and Kohli, P. Analysing mathematical reasoning abilities of neural models. In Proc. of the 7th International Conference on Learning Representations (ICLR), 2019.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Major Suggestions
1. I suggest adding a README explaining the overall repository in the top-level directory during the Author Rebuttal period.
1. I suggest better contextualizing this work with respect to Csordás et al. 2021 in the paper.
Please also refer to Weaknesses section above.
Minor Suggestions
1. It would be better to mention WideResNet in Section 7 (Results) and point to Appendix.
1. It would be nice if the applicability of the proposed approach to recurrent neural networks were explained in the paper.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors describe the limitations of their current method and of the results reported in the paper in Section 10 (Discussion):
1. Their current method requires one to specify which subroutines to look for in advance.
1. Their current method requires one to use causal ablations and control models to properly interpret.
1. The reported results do not contain any analysis on the relationship between structural compositionality and compositional generalization.
\# Personally, I am very interested in 3, which is fully left for the future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your excellent feedback on our work! We have updated the README in the anonymous repository. We agree that it is valuable to add an extended discussion of Csordas et al. 2021 in this paper, and we will include a full paragraph in our related work section to elaborate on the relationship between our contribution and theirs. In particular, we will note how their work convincingly demonstrates that the same underlying algorithm is not being used in different data partitions, which implies that the network is behaving non-compositionally. One interesting difference between their study and ours is that our study explicitly looks for the compositional subroutines that a model might be implementing, whereas Csordas et al. looks for subnetworks that solve subsets of the dataset. Also, we will certainly comment on the WideResNet results in the main body, thanks for pointing that out!
---
Rebuttal Comment 1.1:
Comment: Thank you very much for dealing with my suggestions. Here are two follow-up comments.
* Regarding the major suggestion 2, can you show the draft of the paragraph regarding Csordas et al. 2021 here?
* Can you provide any comments about the applicability (extensibility) to recurrent networks? (Related to the minor suggestion 2. This is minor, and it is completely OK if it is currently unclear. Please just tell me so if it is the case. I'm just curious. )
\# The major suggestion 1 has been completely addressed, thank you.
---
Reply to Comment 1.1.1:
Comment: No problem! Here is a draft of the Csordas et al. 2021 paragraph. We welcome any feedback that you might have on it!
"Most directly related to the present study is Csordas et al. 2021, which also analyzes modularity within neural networks using learned binary masks. Their study also finds evidence of modular subnetworks within a multitask network: within a network trained to perform addition and multiplication, different subnetworks arise for each operation. Csordas et al. 2021 also investigates whether the subnetworks are reused in a variety of contexts, and finds that they are not. In particular, they demonstrate that subnetworks that solve particular partitions of the SCAN or Mathematics dataset oftentimes do not generalize to other partitions. From this, they conclude that neural networks do not flexibly combine subroutines in a manner that would enable full compositional generalization.
However, their work did not attempt to uncover subnetworks that implement specific compositional subroutines within these compositional tasks. For example, they did not attempt to find a subnetwork that implements the "repeat" operation for SCAN, transforming "jump twice" into "JUMP JUMP". Our work does attempt to find such compositional subroutines (e.g. a subroutine that implements "inside" or "contact"), and finds that these subroutines are often represented by modular subnetworks. This finding extends Csordas et al.'s result on a simple multitask setting to more complex compositional vision and language settings, and probes for subroutines that represent intermediate computations in a compositional task (i.e. "inside" is a constituent computation when computing "Inside-Contact"), rather than full solutions to particular tasks in a multitask setting (i.e. the "addition" subroutine provides a complete answer when the input specifies that the network must perform addition)."
With respect to your question about recurrent networks: It is currently unclear to us whether recurrent networks would exhibit more/less/the same structural compositionality as feedforward networks! This is a very interesting question, and we will note it as an exciting direction for future work. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
DäRF: Boosting Radiance Fields from Sparse Input Views with Monocular Depth Adaptation | Accept (poster) | Summary: The paper proposes a method to better use monocular depth in a few-shot NeRF setup. There are mainly two technical contributions to me:
1. Applying mono depth constraint to unseen view;
2. Un-distorting (scale and shift) monocular depth in a per-patch manner, rather than per-image.
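For concreteness, contribution 2 amounts to fitting an affine map per patch instead of per image. A minimal numpy sketch of what that could look like (my own illustration with a closed-form least-squares fit; `fit_scale_shift` and `align_per_patch` are hypothetical names, not necessarily the paper's exact procedure):

```python
import numpy as np

def fit_scale_shift(mde_depth, target_depth):
    """Least-squares scale s and shift t so that s * mde_depth + t
    best matches the target (e.g. NeRF-rendered) depth."""
    d = mde_depth.ravel()
    A = np.stack([d, np.ones_like(d)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, target_depth.ravel(), rcond=None)
    return s, t

def align_per_patch(mde_depth, target_depth, patch=64):
    """Un-distort MDE depth with a separate scale/shift per patch,
    rather than a single global pair per image."""
    out = np.empty_like(mde_depth)
    H, W = mde_depth.shape
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            md = mde_depth[y:y + patch, x:x + patch]
            td = target_depth[y:y + patch, x:x + patch]
            s, t = fit_scale_shift(md, td)
            out[y:y + patch, x:x + patch] = s * md + t
    return out
```

When different image regions carry different affine distortions, a single global (s, t) cannot fit all of them, while the per-patch fit can -- which matches the paper's motivation.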
---
**After rebuttal**: I have read authors' rebuttal and it addresses my concerns.
Strengths: 1. The motivation makes a lot of sense. Monocular depth estimation suffers from various distortions, and un-distorting it with a global scale and shift per image is only a rough approximation. Exploring better ways to un-distort monocular depth is definitely valid motivation.
2. It’s interesting to see that monocular depth networks actually perform decently on rendered images from half-trained NeRF models.
Weaknesses: General
1. L118-L120: I disagree with this definition of few-shot, i.e. $|S| < 20$. The definition should be based on view angle and scene coverage, not on an absolute number of images. For example, in LLFF, a forward-facing dataset, NeRF can be trained very well with $|S| < 20$, because 20 images cover the simple forward-facing scenes very well. In contrast, 50 images might be challenging for 360-degree scenes.
2. Following point 1, how do the few-shot views cover the evaluated scenes? It would be good to have a top-view visualisation of selected cameras' positions and orientations.
3. How many patches are used in training? From the supp mat I can see the patch size is 64x64, i.e. 4096 rays. Does this mean there is only one patch used during training? In that case, modelling patch-wise scale/shift is the same as image-wise scale/shift?
Writing
1. Figure 1 caption: unclear; it seems like it should be … by _applying_ pretrained MDE … (missing the word _applying_)
2. Figure 2:
1. what is the input RGB and its monocular depth? I can see this is a top-view image of back-projected point clouds, but I cannot imagine what’s the input.
2. What is colour coding?
3. I don’t see why Fig 2b is better than Fig 2a from this image. I understand the motivation and it is supposed to be better when un-distort with patch-wise scale/shift, but I cannot see that from this visualisation.
3. Sec 4.1 is kind of repeating intro.
4. Symbol $M_l$ is undefined. I suppose it’s similar to $M_i$, but projected in $l \rightarrow i$ direction?
5. Symbol $s_i, t_i$ are undefined.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See the weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1. Specific definition of few-shot setting and its visualization**
>
We agree with your statement that few-shot cannot be defined naively as $|S| < 20$, as described in our L118-L120. As you have said, its definition should be based on view angle and general coverage of the scene. However, for this very reason, it is difficult to precisely define the minimum number of viewpoints required for few-shot reconstruction, as that number will vary scene by scene according to each scene’s specific shape, geometry, and occlusions.
In this light, current few-shot NeRF works [1,2,3,4] assume extreme cases where i) the known viewpoints overlap with each other minimally, ii) all directions of the scene are viewed at least once so that there are no regions of the scene where no information is given. Such extreme scenarios cause original NeRF models to struggle heavily with geometric reasoning so that they cannot perform proper reconstruction, which necessitates the introduction of few-shot regularization methods.
Considering these criteria and the general characteristics of the scenes in each dataset, these works decide upon a certain number of viewpoints per dataset that results in such an extreme setting. For example, the forward-facing *LLFF dataset* uses 3 equally distant views for its few-shot setting, and the *NeRF-synthetic dataset*, despite its 360-degree viewpoints, also assumes 3 views as its few-shot setting considering its simplicity.
In this manner, we constructed few-shot train/test sets for the Tanks and Temples dataset, where no few-shot standard yet exists, considering that it is a complex, 360-degree, real-world dataset. Following your constructive suggestion, we visualize how these few viewpoints cover our scenes in a minimal yet sufficient manner in Figure 1 of our attached pdf file. We believe this figure will be beneficial in justifying our few-shot setting to the readers, and we promise to include it in the final version of our paper. Thank you for the helpful comment.
> **Q2. Number of patches used in training**
>
We clarify that we sample each patch per iteration, and each patch’s scale/shift value does not equate to global image-wise scale/shift values.
More specifically, we sample a 64 by 64 patch and a 128 by 128 patch per iteration, but these patches are randomly sampled from a random seen pose and a random unseen pose respectively, and it is this corresponding patch-wise region that we regularize with predicted MDE depth. In every iteration, both viewpoints and patch locations are randomly sampled anew, and the patch-wise scale-shift values are modeled accordingly. Since the viewpoints and patches are sampled uniformly, our regularization signal covers every possible region of the scene throughout the course of optimization. In this way, the local supervision signal influences all regions of the scene as optimization progresses.
> **Q3. Details of Figure 2**
>
Thank you for your constructive comment. We agree that Figure 2 is difficult to recognize at first sight, and we promise to fix it.
Our intention for Figure 2 was to visualize our point that a single scale-shift value cannot perfectly fit the ground-truth geometry due to the ambiguity of distances between object instances. For this reason, we visualized the **error** of a predicted point cloud of a room from a bird's-eye view, to show that patch-wise fitting most accurately fits the point cloud to the ground-truth geometry. This is done with jet color coding, so that red means large error and blue means small error. The point cloud is projected depth from the input image, taken from the viewpoint marked by the red camera.
Following your comment, we promise to either replace it with Figure 1 of our supplementary material (which describes the same phenomenon more clearly) or reinforce it with an additional caption that explains it in a more detailed manner.
> **Q4. Clarifications in writing**
>
Thank you for pointing these out. We promise to make a modification to the caption of Figure 1 and Section 4.1 in the final version of our paper.
Also, we clarify that $M_l$ shares a similar definition with $M_i$, which we define as a confidence mask for seen viewpoint $i$ - therefore, $M_l$ corresponds to a confidence mask for unseen viewpoint $l$. Also, the notation $s_i,t_i$ in supplementary materials is a notation mistake: it is supposed to be $w_i, q_i$. Thank you for the careful reading and revision of our paper: we promise to clarify & revise these notations accordingly in the final version of our paper.
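For illustration only, the role of a confidence mask such as $M_i$ or $M_l$ in a depth loss can be sketched as below. This is a hypothetical squared-error form with my own function name; the paper's actual loss and mask construction are not reproduced here:

```python
import numpy as np

def masked_depth_loss(rendered_depth, aligned_mde_depth, conf_mask):
    """Per-pixel squared depth error, weighted by a confidence mask so
    that untrusted pixels contribute nothing to the supervision."""
    err = (rendered_depth - aligned_mde_depth) ** 2
    return float((conf_mask * err).sum() / max(float(conf_mask.sum()), 1e-8))
```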
[1] Jain et al., Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis, ICCV 2021.
[2] Kim et al., InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering, CVPR 2022.
[3] Niemeyer et al., RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs, CVPR 2022.
[4] Yang et al., FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency Regularization, CVPR 2023.
---
Rebuttal Comment 1.1:
Title: Thanks for the additional results and detailed response
Comment: The idea of adapting monocular depth to novel view images that are still under training is interesting to me. The new response addresses most of my questions and concerns, also Fig. 1 in the rebuttal pdf is very helpful so I suggest including it in supp mat later. Overall I'll raise my rating in the final review.
Just one quick clarification regarding the patch size: I understand the patches are randomly sampled in each iteration, but is the patch cropped from an image or is a $64 \times 64$ patch formed by randomly sampling 4096 pixels from an image and reshaping to $64 \times 64$?
---
Reply to Comment 1.1.1:
Title: Thanks and more clarification
Comment: Dear Reviewer VhkU,
Thank you for your encouraging feedback and for increasing your score. We will add Fig. 1 in the rebuttal pdf in the final version of paper.
For further clarification, a 64×64 patch is cropped in a patch-wise manner from an image of a randomly selected camera pose. Please let us know if you have remaining questions or concerns.
Regards, Authors | Summary: The paper tackles the problem of sparse view NeRF reconstruction by leveraging on monocular depth estimation networks as a prior. The main difference between DaRF and existing work is it also computes for a depth loss on unseen views, in contrast to prior works that only constrain depth on the training views. Moreover, they also adapt, i.e. fine tune, the monocular depth estimation network to agree with the depth produced by the NeRF. They also introduce a confidence term that determines which pixels to use the depth loss on. They conduct experiments on Scannet and Tanks and Temples dataset.
Strengths: The paper proposes to use monocular depth estimation as a prior to constrain NeRF optimization in the sparse-view regime. Different from existing works, DaRF found that the monocular depth prior can also be used on unseen views: even on NeRF renderings from an early stage of training, the monocular depth network is able to produce reasonable and cleaner depth maps that can supervise and constrain the NeRF loss (as shown in Figure 3). This is the main contribution and finding of this submission. The authors also show an ablation study to justify the different components that were introduced.
Weaknesses: The concerns I have for the paper are 1) whether the contribution introduced is enough for paper acceptance, and 2) how it positions itself and its claims in contrast to existing works. For the first point, the main contribution and distinction is the finding that the monocular depth prior can also be used to constrain unseen viewpoints. As shown in Figure 3, even on noisy NeRF renderings, the MDE prior can produce clean depth maps. An add-on to this is the slight improvement in results (as shown in the ablation study) from also fine-tuning the MDE network as NeRF is being optimized. However, the way the paper is written and motivated centers on the ambiguity in monocular depth estimation -- as written in the intro, Sec 4.1 and Sec 4.3 -- which is the main premise and story of the existing (though somewhat concurrent) work SCADE. The paper did cite SCADE, but only mentions the difference that DaRF is able to train on unseen views. Differentiating from existing work and clarifying contributions would improve both the paper's presentation and the claims made for the submission. Now, the question is whether contribution 1) of DaRF is enough to meet the bar of NeurIPS, and for this reason I am giving an initial rating of borderline reject and would like to hear from the other reviewers.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. How are the viewpoints selected for the unseen views that was used for L_{unseen}? How many unseen viewpoints are selected and how is the loss balanced with the seen training views? This in my opinion is an important detail to justify and back its main claim.
2. What is the intuition behind using two terms in Eq 6? Specifically, wouldn't the second addend term be enough? It seems redundant and not too intuitive.
3. What dataset was used for the ablation study (Table 3)? Is it the average across all scenes in ScanNet? Figure 8 shows one example from Scene 781 of ScanNet.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors included the limitations in their supplementary material. However, according to the checklist, they claim to include error bars, and I don't think I saw error bars reported in the submission.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1. Contribution of DaRF / comparison with SCADE**
>
Thank you for your comment. While SCADE [1] does have a similar setting to ours (leveraging MDE for few-shot NeRF reconstruction), we would like to emphasize that there are **critical differences** that position our work orthogonal to SCADE.
There is a fundamental difference between our work and SCADE’s approach toward handling the ambiguity present within MDE: our work proposes an online, scene-specific adaptation of MDE which **directly adapts it to predict absolute, canonical geometry** in accordance with few-shot NeRF. Our novel local scale/shift alignment also aids this process. On the other hand, SCADE **injects uncertainty into MDE** through additional pretraining so that canonical geometry can be estimated through probabilistic modeling between multiple modes of estimated depths. So, while the ultimate goals regarding MDE may be similar (ambiguity removal), the core ideas behind them are more or less opposite: our approach **directly removes ambiguity** present in MDE by leveraging canonical geometry captured by few-shot NeRF, while SCADE handles MDE through **embracing its uncertainties and ambiguities** (as stated in its introduction) and solving it with a probabilistic approach.
Through this key difference, our approach attains many advantages over SCADE:
1. First, our approach does not require the **additional pre-training of MDE** that SCADE requires. Because our online MDE adaptation needs neither the additional dataset nor the extra training stages that SCADE relies upon, and enables direct use of an off-the-shelf MDE for few-shot NeRF, it is simpler, faster, and cheaper in terms of training time and computation cost.
2. Second, this approach **bypasses SCADE’s process of predicting multiple (20) depths of each input image** for probabilistic modeling, which adds further training time and computational cost. Since our MDE is adapted in a more precise, scene-specific manner, it only needs to predict a single depth map that can be analytically fitted to NeRF through our **novel patchwise scale-shift fitting**, which is also a major contribution of ours.
3. Our usage of MDE at **unseen viewpoints** for regularization effect and artifact removal, a contribution you pointed out, is, therefore, a natural extension that is possible due to our adapted MDE’s enhanced ability to predict canonical geometry, as noted in Strength 1 of reviewer KpuH.
4. Finally, we point out that this cheaper and faster approach achieves **higher performance** than SCADE.
To summarize, despite having similar objectives, our methodology introduces an orthogonal strategy of leveraging MDE through online training alongside NeRF for per-scene adaptation. This enables our model to achieve higher performance without SCADE’s MDE pretraining, multiple depth predictions, or probabilistic modeling. As the two methods are orthogonal, they could potentially be used in conjunction for even higher performance and more efficient few-shot NeRF regularization, which would be an exciting direction in which to expand our research. We are grateful for your comment and suggestion.
> **Q2. Details regarding unseen view selection and loss**
>
We generate an unseen camera pose every iteration by adding a noise value, sampled uniformly from the interval [-6, +6], to the original Euler rotation angles of a randomly selected input view pose. As mentioned in the training details in our supplementary materials, the weights for the seen and unseen viewpoint losses are the same.
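For concreteness, the pose-perturbation step above can be sketched as follows; this is a minimal numpy illustration in which the function names, the degree unit for the [-6, +6] interval, and the XYZ Euler convention are our assumptions, not details from the paper:

```python
import numpy as np

def rot_euler(angles_deg):
    """Compose a rotation matrix from XYZ Euler angles given in degrees."""
    ax, ay, az = np.radians(angles_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax), np.cos(ax)]])
    Ry = np.array([[np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az), np.cos(az), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def sample_unseen_pose(input_eulers, rng, noise=6.0):
    """Pick a random input-view pose and perturb each of its Euler angles
    by noise drawn uniformly from [-noise, +noise]."""
    base = input_eulers[rng.integers(len(input_eulers))]
    return rot_euler(base + rng.uniform(-noise, noise, size=3))

rng = np.random.default_rng(0)
input_eulers = np.array([[0.0, 30.0, 0.0], [10.0, -20.0, 5.0]])
R_unseen = sample_unseen_pose(input_eulers, rng)
```

The sampled rotation stays close to a real input view, so the rendered "unseen" image remains plausible enough for the MDE prior to be applied.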
> **Q3. Regarding intuition behind Eq6 / redundant terms**
>
We clarify that our methodology revolves around the idea of **adapting MDE toward predicting scene-specific absolute geometry**, which is achieved by the first addend term: this term forces the MDE to adapt toward multiview consistency, so that its ill-posed nature is reduced and its initial **global depth prediction** comes into closer accordance with the absolute geometry captured by NeRF.
In contrast, the second addend term, which takes into account patchwise scale-shift fitting, is designed to aid the modeling of **fine, detailed, local geometry** which the model has difficulty modeling without such local fitting.
We emphasize here that the second term alone cannot guide MDE toward multiview consistent geometry, as it solely deals with locally fitted depths whose location and scale have already been largely altered through scale/shift fitting. Therefore, when only the second term is used, MDE has **no incentive to adapt toward canonical geometry at the global prediction level** and only adapts toward local fine details. This leads to a drop in performance, as shown in the results below. When only the second term is used for optimization (scale-shift), it performs worse in every metric than in other cases where only the first term is used (L1) or both are used in conjunction (ours). This justifies our strategy of using both losses as effective and not redundant.
| | PSNR | SSIM | LPIPS | abs_rel | sq_rel | RMSE | log_RMS |
| --- | --- | --- | --- | --- | --- | --- | --- |
| scale-shift | 21.43 | 0.763 | 0.330 | 0.182 | 0.109 | 0.484 | 0.205 |
| l1 | 21.45 | 0.763 | 0.327 | 0.157 | 0.079 | 0.386 | 0.176 |
| ours | **21.58** | **0.765** | **0.325** | **0.151** | **0.071** | **0.356** | **0.168** |
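For reference, the depth columns in the table above (abs_rel, sq_rel, RMSE, log_RMS) are the standard monocular-depth error measures; the following numpy sketch uses their common definitions, while the paper's exact evaluation protocol (e.g., any masking of invalid pixels) is an assumption on our part:

```python
import numpy as np

def depth_metrics(pred, gt):
    """Common monocular-depth error metrics: abs_rel, sq_rel, RMSE, log RMS."""
    err = pred - gt
    return {
        "abs_rel": float(np.mean(np.abs(err) / gt)),
        "sq_rel": float(np.mean(err ** 2 / gt)),
        "rmse": float(np.sqrt(np.mean(err ** 2))),
        "log_rms": float(np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2))),
    }
```

Lower is better for all four, which is why the joint loss ("ours") row dominating every column supports the two-term design.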
> **Q4. Dataset used for ablation**
>
The quantitative results of the Ablation study in Table 3 contain **average values** of quantitative results yielded by every scene in ScanNet. It is solely for visualization purposes that we use Scene781 of ScanNet as a representative for qualitative results in Figure 8.
> **Q5. Error bars**
>
We apologize that we have made a mistake in checking the checklist, as we do not show error bars in our paper. Thank you for pointing this out.
[1] Uy et al., SCADE: NeRFs from Space Carving with Ambiguity-Aware Depth Estimates, CVPR 2023
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed responses and for addressing the questions raised. I appreciate that a distinction and clarification of the differences with SCADE is brought up in the rebuttal.
My main concern as written in my initial review was the lack of distinction and differentiation with the existing work, SCADE. The contributions claimed in the submission did not differentiate what is novel and what is existing especially compared to the recent work. It was cited, but it did not bring up that it models ambiguity in MDE, which is the concern I raised. I do agree on their differences, and as I also pointed out in the review, the use of MDE on unseen viewpoints is novel.
Following up on this rebuttal, as mentioned, I agree with the point on unseen viewpoints as well as computation time. I don't quite buy the argument on "orthogonality" of the two strategies. On the higher performance than SCADE, I think is less convincing since it was only shown on the scannet dataset in the original setting and the performance gap is marginal. Moreover, the ambiguities that SCADE models seem to do well on non-opaque surfaces as shown in their teaser, which is also the failure case that DaRF shows in the PDF attached in the rebuttal, which makes me think that the claim "higher performance" may not be backed that well. I do acknowledge that the official code at the time of the submission may not have been released. But the lack of comparison together with my original concern in the lack of differentiation in terms of contribution is why I gave the negative rating.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer Ndsp,
Thanks for your reply. First of all, following your advice, we promise to strengthen our work’s comparison against SCADE in the final version of our paper, especially in regard to the MDE ambiguity removal process.
However, we would appreciate further details as to why you do not agree with our differentiation from SCADE, i.e., that the two methods are fundamentally different and “orthogonal”: in our rebuttal, we described our perspective on these differences in detail, such as:
1. How our work adapts (overfits) an MDE to a single specific scene to direct ambiguity removal, while SCADE increases MDE’s ambiguity for general probabilistic modeling.
2. How our work tackles the problem of global scale fitting of MDE depth, in which alignment to one region of the scene inevitably leads to misalignment in many other regions; our additional contribution of patchwise scale-shift alignment addresses this and has proven effective.
3. Increased methodological simplicity and no need for additional MDE pretraining unlike SCADE due to our adapted MDE’s capability to directly predict absolute, canonical geometry.
and we would be very grateful if you could elaborate in more detail on why you view these points as **not**, in fact, differences from SCADE and thus lacking novelty. If you have any further questions regarding our methodology, we would be happy to answer them.
Regarding the “marginal” performance gap with SCADE that you point out, we again emphasize that our approach is orthogonal to SCADE's, achieving competitive results with an entirely different approach that is notably simpler and more computation-efficient, as you have agreed. Moreover, we show here additional quantitative results in comparison to SCADE on an in-the-wild dataset, and we will also add to our final paper additional qualitative results that perform competitively with SCADE on the transparent surfaces you mentioned.
| | PSNR | SSIM | LPIPS |
| --- | --- | --- | --- |
| DDP | 21.28 | 0.727 | 0.366 |
| SCADE | 22.82 | 0.743 | **0.347** |
| Ours | **22.92** | **0.760** | 0.390 |
Lastly, we respectfully emphasize that our work was **concurrent** with SCADE at the time of submission. SCADE was submitted to CVPR 2023 and had not yet been published at the time of the NeurIPS 2023 submission, nor had its code been released. As a result, experimental comparison was impossible at the time of writing, and its influence on our methodology was minimal. Therefore, we believe our work was under no obligation to put such a strong focus on comparison and differentiation with SCADE in our main paper; nevertheless, thanks to your constructive comment we could more clearly analyze the methodological contrast between the two methods, and we promise to emphasize the differences between the works in our final paper. | Summary: This paper addresses the problem of few-shot NeRF reconstruction. The authors propose using monocular depth estimation (MDE) networks to provide a geometry prior for NeRF at both seen and unseen viewpoints. They propose overcoming the ambiguity problems associated with monocular depths by MDE adaptation. Experimental results demonstrate improvements in both rendered novel views and rendered depths.
Strengths: + Unlike previous works which only exploit depth information at seen viewpoints, this work also exploits MDE to constrain NeRF at unseen viewpoints, leading to more robust and coherent NeRF optimization. The authors demonstrate through an example that the strong geometric prior within the MDE model enables it to generate reliable depth from noisy NeRF renderings. This makes MDE at unseen viewpoints feasible.
+ The proposed patch-wise scale-shift fitting helps reduce the impact of erroneous depth differences generated by MDE networks when distilling the monocular depth prior into the NeRF.
+ Adapting MDE to absolute scene geometry embedded in NeRF further helps to resolve ambiguities in surface curvature and orientation, and improve multiview consistency.
+ The proposed method demonstrates state-of-the-art results, both qualitatively and quantitatively, on two real datasets, particularly showing superb rendered depths in few-shot NeRF.
+ An ablation study has been included to demonstrate the effectiveness of each major design component.
+ Overall this paper is well-written and well-organized. It is easy to follow.
Weaknesses: - In MDE adaptation, it is not clear why the monocular depths predicted from the rendered input views, rather than those predicted from the input views themselves, are adopted in (6). There is no explanation or discussion of the effect of choosing between these two, nor of why only input views are considered.
- In confidence modeling, why is the predicted depth of a point in an input view (after scaling and shifting) directly compared with its rendered depth in a target view in (9)? Note that both the predicted depth and the rendered depth are measured with respect to their own viewpoints.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Figure 2 is rather confusing. It is not clear how one should interpret / understand the renderings. More descriptive caption should be provided.
- Are there any failure cases? Any analysis for the causes of failure?
- What is the minimum number of viewpoints for the proposed model to produce reasonable reconstruction?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations and impact are only included in the supplementary material, but not in the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1. Details of MDE adaptation**
>
Thank you for pointing this out. Equation 6 has a **notation mistake** on our part, as we do use monocular depths predicted from ground truth input image, $D^*_{i}$, and not the one from depth rendered from NeRF, $\bar{D}^*_{i}$. Our accurate methodology, in which we use the depth predicted with MDE from ground truth input images, is correctly described in Figure 1. We apologize for this mistake and we promise to revise it in the final version of our paper as the following:
$
\mathcal{L}_\text{MDE} = \sum\_{I\_i\in\mathcal{S}} \sum\_{\textbf{p} \in \mathcal{P}} \left(||\texttt{sg} \left(\bar{D}\_i(\textbf{p})\right) - D^*\_i(\textbf{p})|| + ||(w\_i\texttt{sg}\left(\bar{D}\_{i}(\textbf{p})\right)+q\_i) - D^*\_i(\textbf{p}) || \right)
$
We used only input viewpoints for MDE adaptation, since using rendered color patches at unseen viewpoints as supervision might lead the MDE to lose its geometric prior due to the noisy rendering results.
> **Q2. Details of Confidence Modeling**
>
Thank you for pointing this out. Your comment is correct, and upon closer inspection, we have made a mistake in our thresholding equation so it does not correctly describe our masking process. We apologize for our mistake, and we are also very thankful for your careful reading and revision of our paper. The correct thresholding equation is as follows:
$
M\_i(\textbf{p}) = \big[\|R\_{i}(w\_i{D}^*\_i(\textbf{p}) + q\_i)K^{-1}\textbf{p} - R\_{l}\bar{D}\_l(\textbf{p}')K^{-1}\textbf{p}' \| < \tau\big]
$
where $R_{i}$ and $R_{l}$ are the camera-to-world extrinsic matrices that transform 3D points from the camera view spaces to the canonical space. In short, using the predicted depth at viewpoint $i$ and the NeRF-rendered depth at target viewpoint $l$, we lift both pixels to 3D points in the canonical space and calculate the distance between them. Only if the distance is lower than the threshold (i.e., the two are in agreement) do we mark the point as reliable. Again, thank you for your constructive revision, and we promise to correct this mistake in the final version of our paper.
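For concreteness, a minimal numpy sketch of this agreement check follows; the function and variable names are illustrative (not from our codebase), a shared intrinsic matrix $K$ is assumed, and translation is omitted as in the equation above:

```python
import numpy as np

def confidence_mask(p, p_prime, d_pred, d_rend, w, q, R_i, R_l, K, tau):
    """Back-project pixel p (view i, scaled/shifted MDE depth) and pixel
    p' (view l, NeRF-rendered depth) to 3D, rotate both into the canonical
    frame, and accept the point only if the two estimates agree within tau.
    p and p_prime are homogeneous pixel coordinates [u, v, 1]."""
    K_inv = np.linalg.inv(K)
    X_i = R_i @ ((w * d_pred + q) * (K_inv @ p))
    X_l = R_l @ (d_rend * (K_inv @ p_prime))
    return bool(np.linalg.norm(X_i - X_l) < tau)
```

With identity intrinsics and extrinsics, a pixel whose two depth estimates coincide passes the mask, while a large disagreement is filtered out.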
> **Q3. Details of Figure2**
>
Thank you for your constructive comment. We agree that Figure 2 is difficult to recognize at first sight, and we promise to fix it.
Our intention for Figure 2 was to visualize our point that a single scale-shift value cannot perfectly fit the ground-truth geometry due to the ambiguity of distances between object instances. For this reason, we visualized the **error** of a predicted point cloud of a room from a bird's-eye view to show that patch-wise fitting fits the point cloud to the ground-truth geometry most accurately. This is done with jet color coding, where red means large error and blue means small error. The point cloud is the projected depth of the input image taken from the viewpoint marked by the red camera.
Following your comment, we promise to either replace it with Figure 1 of our supplementary material (which describes the same phenomenon more clearly) or reinforce it with an additional caption that explains it in a more detailed manner.
> **Q4. Any Failure cases?**
>
Regarding failure cases, there are occasions when our confidence modeling fails and is unable to completely filter out erroneous predictions. Also, in out-of-domain cases where MDE fundamentally models depth incorrectly, our model shows a drop in performance as well. A failure case is shown in Figure 2 of the attached PDF file, which shows a wrong depth prediction on a window, since neither NeRF nor the MDE model predicts accurate depth for the transparent object.
> **Q5. Minimum number of viewpoints?**
>
We do not analytically calculate the minimum number of viewpoints required for few-shot reconstruction, as that number varies scene by scene according to each scene's shape, geometry, and occlusions. Instead, like other few-shot NeRF methods [1,2,3,4], we assume an extreme scenario in which the known viewpoints hardly overlap with each other yet all directions of the scene are viewed at least once, which is 10 viewpoints in our case. If we reduce the number of viewpoints from here, scene reconstruction will still happen, but it will result in unseen directions/regions of the scene for which no information is provided whatsoever. In such cases our model cannot perform reconstruction, as it is not a generative model capable of imagining unseen parts, nor was it designed to be. Furthermore, the camera setting for the few-shot scenario is visualized in Figure 1 of the attached PDF file.
[1] Jain et al., Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis, ICCV 2021.
[2] Kim et al., InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering, CVPR 2022.
[3] Niemeyer et al., RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs, CVPR 2022.
[4] Yang et al., FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency Regularization, CVPR 2023.
---
Rebuttal Comment 1.1:
Comment: The authors have addressed all my concerns in their rebuttal. After taking the comments of fellow reviewers into account, I would like to keep my original recommendation of "weak accept". | Summary: The paper presents a new few-shot neural radience field approach based on joint monocular depth adaption. The main idea of the proposed approach is to utilize the monocular depth estimator to improve the geometry prior of NeRF representation. The motivation is reasonable. Also, it presents attractive performance on both indoor and outdoor scenes.
Strengths: 1. The idea of utilizing a monocular depth estimator is interesting, and the depth estimator can provide reasonable geometry information for the scene reference.
2. The proposed approach has provided attractive performance on the real-world benchmarks. Sufficient ablations have been conducted to validate the design of the proposed algorithm.
3. The presentation of the paper is good.
Weaknesses: 1. The paper is based on SCADE [49]. According to the results in Table 1, the proposed approach scores lower than [49] on LPIPS. Is there any potential explanation for this result? Also, how about the results on the "in-the-wild benchmark" proposed in [49]?
2. What happens when the testing data is out of the distribution of the depth estimator, for example, when the depth estimator fails to predict accurate depth information?
3. How about the inference speed of the proposed algorithm?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please address the comments released in the weakness section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have not fully discussed the limitations of the proposed approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1. Comparison with SCADE**
>
Thank you for your comment. First of all, we would like to emphasize that our paper is **not based** on SCADE [1], nor does it use SCADE as a baseline; instead it suggests a method orthogonal to it. There is a fundamental difference between our work and SCADE's approach toward handling the ambiguity present within MDE: our work proposes an online, scene-specific adaptation of MDE which **directly adapts it to predict canonical geometry** in accordance with the NeRF under optimization. On the other hand, SCADE **injects uncertainty into MDE** through additional pretraining so that canonical geometry can be estimated through probabilistic modeling between multiple modes of estimated depths. So, while the ultimate goals regarding the usage of MDE may be similar (ambiguity removal), the core ideas behind them can be seen as more or less opposite.
In this light, we offer a potential explanation for the lower LPIPS score in Table 1: our work's baseline, K-planes [2], has been shown to perform slightly weaker in LPIPS than **NeRF-pytorch** [3], which is the baseline for SCADE. We chose K-planes as our baseline for its fast optimization speed and more efficient memory usage, but due to this trait of the baseline, we believe our model performs slightly worse on the LPIPS metric than SCADE while exceeding it on all other metrics. At the time of our submission, the code for SCADE had not been released, so we could not directly perform experiments with SCADE or its baseline; now that it has been released, we provide quantitative results for an accurate comparison.
Here are the results of our model, DäRF, on the in-the-wild benchmark proposed in SCADE, which show that our model obtains slightly better results than SCADE on two of the three metrics.
| | PSNR | SSIM | LPIPS |
| --- | --- | --- | --- |
| DDP | 21.28 | 0.727 | 0.366 |
| SCADE | 22.82 | 0.743 | **0.347** |
| Ours | **22.92** | **0.760** | 0.390 |
> **Q2. How to deal with errors in MDE**
>
Due to MDE's weakness in predicting geometry for out-of-distribution data, our method does show a drop in performance when faced with testing data outside the distribution of the depth estimator. However, we also point out that this is a fundamental weakness of all NeRF methods that utilize MDE, and the strength of our method derives from the fact that it allows adaptation of an off-the-shelf pre-trained MDE to a scene without any additional training or dataset, unlike SCADE.
> **Q3. Inference speed of our model**
>
The inference speed of our model is identical to that of K-planes, which is our baseline model.
[1] Uy et al., SCADE: NeRFs from Space Carving with Ambiguity-Aware Depth Estimates, CVPR 2023
[2] Fridovich-Keil et al., K-Planes: Explicit Radiance Fields in Space, Time, and Appearance, CVPR 2023.
[3] Lin Yen-Chen. Nerf-pytorch. 2020. | Rebuttal 1:
Rebuttal: # General Response
We would like to first thank all the reviewers for their helpful suggestions and constructive reviews. We are greatly encouraged by their assessment of our work as **well-motivated** (VhkU), with novel and **interesting** (oMo1, VhkU) findings of effectively **resolving MDE’s ambiguities** (KpuH, VhkU), and its exploitation at **unseen viewpoints** (Ndsp, KpuH, VkhU) thus **improving upon previous models** (Eioc, KpuH). The reviews assess our work as displaying **state-of-the-art** (KpuH) and **attractive performance** (oMo1) on **real-world** (KpuH, oMo1) benchmarks, well-supported by **sufficient ablation** (oMo1, KpuH, Ndsp) and **good presentation** (oMo1, KpuH). We are grateful that they saw significance in our quantitative and qualitative improvement over our baselines, achieving few-shot novel view synthesis quality competitive to current SOTA models. We carefully address each concern given by reviewers with detailed explanations and supporting experimental results.
We clarify our work's distinction from SCADE [1], which has been brought up by multiple reviewers due to its similar motivation of exploiting MDE for few-shot NeRF. Our work proposes an online, scene-specific adaptation of MDE which **directly adapts it to predict canonical geometry** in accordance with the NeRF under optimization. On the other hand, SCADE **injects uncertainty into MDE** so that canonical geometry can be estimated through probabilistic modeling between multiple estimated depths. Therefore, while the two works' ultimate goal regarding MDE is similar (ambiguity removal), the core ideas behind them are orthogonal. Another important distinction is that our method of MDE adaptation allows it to be leveraged to **predict depths of unknown viewpoints for artifact removal and geometry regularization**, which is a novel contribution of our method. We further elaborate on this distinction in much deeper detail in our first response to Reviewer Ndsp.
[1] Uy et al., SCADE: NeRFs from Space Carving with Ambiguity-Aware Depth Estimates, CVPR 2023
Pdf: /pdf/1d7497e50a2965d9acddfa7faca21e9b658e170e.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes a sparse-view NeRF framework that jointly trains a NeRF with a monocular depth estimator. By adapting the MDE network to the target scene, the predicted depth will provide better geometry prior for the NeRF model.
Strengths: The joint training of MDE and NeRF improves the model's ability compared to existing methods.
After training both models, we'll have a better NeRF that renders more accurate novel views and also a more accurate mono depth estimator.
Weaknesses: - For the few-shot NeRF setting, since we hope to train the NeRF directly, we shall have access to camera poses for multi-views, right?
If that's the case, what's the benefit of using monocular depth estimators, compared to multiview stereo networks such as MVSNet?
- If we use COLMAP to obtain the camera poses, we should also be able to construct a point cloud. What's the benefit of using the proposed distillation mechanism, compared to overcoming the scale and shift ambiguity problem of mono depth with depths extracted from point clouds?
- How is Tab. 2 calculated? Are they evaluated on the trained views of ScanNet?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please refer to weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Please refer to weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
> **Q1. Benefits of MDE compared to MVS networks**
>
Thank you for pointing this out. It is true that MVS networks can be used for better NeRF training, as shown in previous works [1, 2]. However, their approach of leveraging image features combined from multiple viewpoints becomes a critical weakness in more extreme wide-baseline scenarios, where the number of input viewpoints is very limited and the distances between known cameras are large: finding consensus between image features becomes much harder, resulting in a sharp drop in performance.
Our methodology of leveraging monocular depth networks' powerful geometric prior helps overcome such weaknesses of MVS methods: because MDE predicts geometric information for each image independently, it fundamentally bypasses the wide-baseline problems of previous MVS methods. In addition, our MDE regularization method can be **directly extended to unseen viewpoints** for stronger regularization and artifact removal, which is impossible for standard MVS methods that operate only on input viewpoints.
In this way, our paper demonstrates how these desirable qualities of MDE can be effectively leveraged in sparse-view NeRF settings in a complementary manner, orthogonal to previous MVS methods and effectively improving upon their weaknesses.
> **Q2. Comparison to using COLMAP point cloud for scale-shift fitting**
>
It is true that the point cloud acquired through COLMAP provides us with absolute depth information and is beneficial for aligning the scale-shift values of the predicted depth, especially for accelerating the early stages of training.
However, since a COLMAP point cloud only provides sparse depth information, it falls short in modeling local (in our paper, patch-wise) scale-shift values: the points are unequally distributed, and sparse regions do not provide our model with enough geometric information for patch-wise scale-shift modeling. Since local MDE alignment is a more precise solution than global scale-shift fitting [3], we find that using the dense information predicted by MDE alone allows us to robustly capture patch-wise scale-shift values for depth fitting regardless of where the patch is sampled.
Also, because COLMAP point clouds are generated from the input images, they are severely limited in providing our model with accurate geometric data for unseen regions, which makes them unfit for our unseen-viewpoint depth regularization loss. Therefore, we find that using rendered depth captured by NeRF gives us stronger advantages than naïvely employing a COLMAP point cloud to overcome the scale and shift ambiguity problem of monocular depths.
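For concreteness, patch-wise scale-shift fitting can be illustrated as an independent closed-form least-squares fit of (w, q) per patch; the following numpy sketch is a minimal illustration under assumed names and patch size, not our actual implementation:

```python
import numpy as np

def fit_scale_shift(d_mde, d_nerf):
    """Closed-form least squares for (w, q) minimizing ||w*d_mde + q - d_nerf||^2."""
    A = np.stack([d_mde, np.ones_like(d_mde)], axis=1)
    (w, q), *_ = np.linalg.lstsq(A, d_nerf, rcond=None)
    return w, q

def patchwise_align(d_mde, d_nerf, patch=16):
    """Fit an independent scale/shift per (patch x patch) block of the depth map."""
    out = np.empty_like(d_mde)
    H, W = d_mde.shape
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            block_mde = d_mde[y:y + patch, x:x + patch]
            block_nerf = d_nerf[y:y + patch, x:x + patch]
            w, q = fit_scale_shift(block_mde.ravel(), block_nerf.ravel())
            out[y:y + patch, x:x + patch] = w * block_mde + q
    return out
```

Because each patch gets its own (w, q), dense MDE predictions are needed in every patch, which is exactly where a sparse COLMAP point cloud falls short.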
> **Q3. Details of evaluation method in Tab.2**
>
We evaluated the adapted MDE model only on test views of ScanNet. Table 2 evaluates the view consistency of the MDE model [4], so we utilize a single scaling factor s for each scene, namely the median scaling value averaged across all test views.
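A minimal sketch of this median-scaling protocol (illustrative; it assumes the per-view ratio median(gt)/median(pred), averaged over test views, which is our reading of the description above):

```python
import numpy as np

def median_scale(pred_depths, gt_depths):
    """Single per-scene scaling factor: the per-view median ratio
    median(gt)/median(pred), averaged across all test views."""
    ratios = [np.median(g) / np.median(p) for p, g in zip(pred_depths, gt_depths)]
    return float(np.mean(ratios))
```

Using one factor per scene (rather than per view) is what makes the metric sensitive to cross-view consistency of the MDE predictions.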
[1] Wei et al., NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor Multi-view Stereo, ICCV 2021
[2] Chibane et al., Stereo Radiance Fields (SRF): Learning View Synthesis for Sparse Views of Novel Scenes, CVPR 2021
[3] Zhang et al., Hierarchical normalization for robust monocular depth estimation, NeurIPS 2022
[4] Zhang et al., Consistent depth of moving objects in video, SIGGRAPH 2021. | null | null | null | null | null | null |
PointGPT: Auto-regressively Generative Pre-training from Point Clouds | Accept (poster) | Summary: This paper investigates self-supervised point cloud learning by introducing the GPT concept (e.g., point order) to the masked modeling framework. Specifically, the clustered point patches are arranged into ordered sequences based on spatial proximity. Then, the masked patches can be predicted without leaking their position (e.g., patch centers) in an auto-regressive manner. Experiments are conducted on object tasks.
Strengths: 1. In my view, the most attractive point is the enlarged training set, which brings significant performance improvement over the competitors (cf. Tab 1). This justifies the importance of the pretraining dataset for point clouds.
2. Significant performance on object tasks (e.g., shape classification), mainly due to the enlarged pretraining dataset.
3. The introduced relative direction prompt via point order is also reasonable, which avoids the position leak of patch reconstruction.
Weaknesses: 1. Although the relative direction prompt via point order is highlighted, it brings marginal performance improvement (cf. Tab 4 (e)). However, the enlarged pre-training dataset contributes most of the performance improvement and is understated. Therefore, in my view, a paper revision should be conducted.
2. The proposed dual masking strategy sets some attention values to zero, which is similar to the popular attention dropout (cf. Attention Is All You Need). Please clarify their differences.
3. Besides the object-level tasks, more practical downstream tasks (e.g., detection and segmentation with scene input) are expected to be evaluated, especially when the scene datasets are used in the pretraining stage in this paper.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See the weakness
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **@Q1 - The effectiveness of PointGPT's designs**.
**(1) Relative direction prompt is able to enhance the generalization ability at a negligible cost**. Ablation experiments are conducted in a **high-capacity model training scenario**, where superior pre-training generalization is required. We re-train PointGPT-B using absolute positional encoding (`PointGPT-B-A`) on ScanObjectNN (OBJ_BG, OBJ_ONLY, PB_T50_RS) and ModelNet40 (1k points) datasets, and report the accuracy, the number of parameters (Params), and total pre-training time (Time). The results indicate that adopting the relative direction prompt leads to noticeable performance improvement at a negligible cost. We will incorporate the ablation experiments and results in the revised version to clarify the effectiveness of the relative direction prompt.
|Methods|OBJ_BG|OBJ_ONLY|PB_T50_RS|ModelNet40|Params|Pre-training time|
|-|-|-|-|-|-|-|
|PointGPT-B-A|95.2|94.5|91.3|94.2|82.1M|75 hours|
|PointGPT-B|95.8|95.2|91.9|94.4|82.1M|75 hours|
**(2)** Additionally, we conducted more experiments and verified that **the improvements introduced by PointGPT-B/L are closely associated with the designs of PointGPT**, which effectively address the information leakage and enhance the generalization ability. Specifically, additional experiments are conducted by re-training the well-performing methods Point-MAE and Point-M2AE using the ViT-B configuration with UHD and LHD datasets (`Point-MAE-B` and `Point-M2AE-B`). Experimental results exhibit the **significant performance improvement achieved by PointGPT under the same utilization of large datasets and model parameters**, despite Point-M2AE benefiting from the multi-scale features. For a fair comparison, we exclude methods using cross-modal information and teacher models.
**(3) In the revised article, we will emphasize the importance of both PointGPT's designs and the training datasets**. This includes emphasizing the role of large datasets in improving accuracy in the Abstract and Introduction, as well as visually showcasing the performance gains achieved through enlarging the dataset size in the paper.
|Methods|OBJ_BG|OBJ_ONLY|PB_T50_RS|ModelNet40|Params|Pre-training time|
|-|-|-|-|-|-|-|
|Point-MAE-B|94.2|93.9|90.2|94.2|120.1M|73 hours|
|Point-M2AE-B|95.2|94.3|91.2|94.3|77.5M|156 hours|
|PointGPT-B|95.8|95.2|91.9|94.4|82.1M|75 hours|
**@Q2 - Differences between the masking strategy and attention dropout**.
**(1) The effect on information leakage**. Although both methods involve the exclusion of random regions, the regions masked by the masking strategy remain consistent across different attention layers, thereby preventing information leakage from masked regions. In contrast, attention dropout varies across different layers, allowing the information dropped out in one layer to still be learned in other layers, resulting in information leakage.
**(2) The impact on attention weight computation**. The masking strategy involves setting `the elements of the mask matrix` $M^{d}$ to 0. This results in the corresponding position being assigned a value of $-\infty$ during the softmax calculation, entirely eliminating the influence of masked regions on attention weight computation. In contrast, attention dropout simply sets the `obtained attention weights` to 0, allowing dropped regions to retain their impact on the calculation of attention weights for visible regions.
**(3) Masking strategy outperforms attention dropout significantly**. Ablation studies are conducted to directly exhibit the effect of masking strategy and attention dropout. We re-train the PointGPT-S model with the attention dropout (`Dropout`), and the accuracy on the ScanObjectNN (OBJ_BG, OBJ_ONLY, PB_T50_RS) and ModelNet40 (1k points) datasets is reported below. The results demonstrate the superior performance achieved by the masking strategy in comparison to the attention dropout.
|Methods|OBJ_BG|OBJ_ONLY|PB_T50_RS|ModelNet40|
|-|-|-|-|-|
|Dropout|89.1|87.9|84.3|93.1|
|Ours|91.6|90.0|86.9|94.0|
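As a toy numerical sketch of the distinction in (2) (our illustration, not the paper's code): masking before the softmax removes a key from the normalization entirely, while zeroing attention weights after the softmax, as dropout-style exclusion does, leaves the dropped key's influence in the normalization of the visible weights.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.5, 3.0])   # one query over four keys
visible = np.array([1.0, 1.0, 0.0, 1.0])  # 0 marks the masked key

# Masking strategy: the masked position becomes -inf BEFORE softmax, so
# it receives zero weight and the visible weights renormalize properly.
w_mask = softmax(np.where(visible == 1.0, scores, -np.inf))

# Dropout-style zeroing: weights are computed over ALL keys first and
# the dropped entry is zeroed afterwards, so the dropped key still
# affected the normalization of the visible weights.
w_drop = softmax(scores) * visible

print(w_mask.sum())  # sums to 1: a proper distribution over visible keys
print(w_drop.sum())  # sums to < 1: the dropped key leaked into normalization
```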
**@Q3 - Practical downstream tasks**.
Thanks for your suggestions! We evaluate the performance of PointGPT on the **object detection task on the nuScenes dataset** and observe its adaptability to this task with minor adjustments for handling large-scale point clouds. PointGPT utilizes 12 attention blocks, each equipped with 8 heads, 128 input channels, and 256 hidden channels. Pre-training and fine-tuning of PointGPT are performed using the nuScenes dataset, excluding the post-pre-training stage. We compare PointGPT with several well-performing methods and report the major official metrics, mean Average Precision (mAP) and nuScenes detection score (NDS).
|Methods|Reference|mAP|NDS|
|-|-|-|-|
|VISTA [1]|CVPR 2022|63.0|69.8|
|Focals Conv [2]|CVPR 2022|63.8|70.0|
|Transfusion [3]|CVPR 2022|65.5|70.2|
|PointGPT||66.8|71.3|
The specific modifications are as follows: (1) Voxelization: Replacing KNN-based grouping with voxelization, considering each voxel as a point patch. (2) Shift window attention: Confining the receptive field of the attention within a window and employing the window shift operation. (3) Point sampling: Randomly sampling up to K points within each voxel as the reconstruction targets.
[1] Deng, Shengheng, et al. "Vista: Boosting 3d object detection via dual cross-view spatial attention." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[2] Chen, Yukang, et al. "Focal sparse convolutional networks for 3d object detection." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[3] Bai, Xuyang, et al. "Transfusion: Robust lidar-camera fusion for 3d object detection with transformers." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification. The effectiveness of the relative direction prompt and the proposed masking strategy is well received. Given that I am still concerned about the marginal improvement (i.e., about 0.5% Acc on average in Tab of Q1(2)) when fairly compared with the STOA (e.g., Point-M2AE), I will slightly raise my score to Borderline Accept.
---
Reply to Comment 1.1.1:
Title: Thanks for your feedback!
Comment: Thanks for considering our responses and raising the score! We will update our revised paper according to our discussions. Thanks again for your insightful and constructive suggestions that improve paper quality. We will be happy to address any further questions or concerns about the work. | Summary: This paper proposes PointGPT, a new self-supervised learning strategy for 3D representation learning. PointGPT follows the success of autoregressive pre-training paradigm in NLP and adapts it into 3D point clouds. With larger pre-training dataset and a post-pre-training stage, PointGPT achieves SOTA performance on different benchmarks.
Strengths: 1. It's good to introduce GPT-style pre-training into 3D tasks. Although this idea is straightforward, this paper is the first attempt to my best knowledge.
2. The motivation of GPT pre-training in 3D is also reasonable and well clarified (avoid shape information leakage).
3. The writing, equations, and figures are easy to follow.
Weaknesses: 1. The experiment tables should incorporate the comparison of learnable parameters during training. It's very important to know the parameters of 3D pre-training to better judge the contribution.
2. How about other methods with larger pre-training dataset and post-pre-training? This is also a necessary ablation study to the paper.
3. It's better to cite this related paper in 'Methods using cross-modal information and teacher models':
Learning 3D Representations from 2D Pre-trained Models via Image-to-Point Masked Autoencoders, CVPR 2023
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I think it's an interesting paper. Expect the author rebuttal to solve my concerns for increasing the rating.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **@Q1 - Learnable parameters**.
Thanks for your suggestions! We will provide the number of learnable parameters (Params) in the revised article. For a fair comparison, we do not consider methods that utilize cross-modal information and teacher models, and mainly present the number of model parameters for well-performing single-modal self-supervised models here.
|Models|Point-BERT|Point-MAE|Point-M2AE|PointGPT-S|PointGPT-B|PointGPT-L|
|-|-|-|-|-|-|-|
|Params|25.5M|29.0M|15.3M|19.5M|82.1M|242.2M|
**@Q2 - Larger pre-training dataset and post-pre-training**.
**PointGPT outperforms comparison methods with comparative training parameters and the same training data**. Specifically, experiments are conducted by re-training Point-MAE and Point-M2AE using the ViT-B configuration with collected larger datasets and post-pre-training stage. For a fair comparison, we focus on re-training well-performing single-modal self-supervised methods, Point-MAE, and Point-M2AE (`Point-MAE-B` and `Point-M2AE-B`). The results, encompassing ScanObjectNN (OBJ_BG, OBJ_ONLY, PB_T50_RS), ModelNet40 (1k points), number of parameters (Params), and total pre-training time, are presented below, which demonstrate the superior performance with high-capacity models. Additionally, we observe that Point-M2AE requires twice the pre-training time compared to our PointGPT, mainly due to its hierarchical architecture and dedicated multi-scale masking strategy.
|Methods|OBJ_BG|OBJ_ONLY|PB_T50_RS|ModelNet40|Params|Pre-training time|
|-|-|-|-|-|-|-|
|Point-MAE-B|94.2|93.9|90.2|94.2|120.1M|73 hours|
|Point-M2AE-B|95.2|94.3|91.2|94.3|77.5M|156 hours|
|PointGPT-B|95.8|95.2|91.9|94.4|82.1M|75 hours|
**@Q3 - Related work**.
Thanks for your suggestions! We will cite this related paper 'Learning 3D Representations from 2D Pre-trained Models via Image-to-Point Masked Autoencoders'. Additionally, we will include more recent articles to enhance the coverage of our related work.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. It addresses most of my concerns. It looks like PointGPT has good parameter and training efficiency. I recommend the authors include these results (parameter comparison and -B results) in the revised version. I will raise the rating.
Also, I expect the authors to cite and discuss two other related works in CVPR 2023.
[1] Parameter is not all you need: Starting from non-parametric networks for 3d point cloud analysis. CVPR 2023
[2] Meta Architecture for Point Cloud Analysis. CVPR 2023
---
Reply to Comment 1.1.1:
Title: Thanks for your feedback!
Comment: Thanks for upgrading your score and providing valuable feedback! The submitted results will be integrated into the revised version, and these related papers will be cited and discussed in the revised article.
**(1)** In the first paper, an innovative non-parametric network, Point-NN, is proposed. Building upon this, the Point-PN network is introduced, achieving promising performance across diverse tasks with only a few learnable parameters.
**(2)** The second paper proposes a unified framework called PointMeta, exploring appropriate practices for each component to derive a fundamental building block named PointMetaBase. The proposed approach improves accuracy while reducing computational complexity.
These approaches introduce effective frameworks that yield promising performance across a range of tasks. However, they still necessitate fully-supervised training from scratch. In contrast, PointGPT is able to acquire latent representations without relying on annotations, achieving enhanced generalization in downstream tasks. Notably, **our PointGPT consistently demonstrates superior performance** across the majority of benchmark evaluations. This verifies the effectiveness of our autoregressive generative pre-training approach for point clouds, affirming its capacity to enhance generalization and accuracy in downstream tasks.
Thanks again for your insightful and constructive suggestions that improve paper quality! We will be happy to address any further questions or concerns about the work. | Summary: This paper propose an auto-regressively generative pre-training paradigm for point cloud feature encoding. By incorporating GPT, the disorder and low information density properties of point clouds are addressed. Besides GPT, a dual masking strategy is proposed to improve the pre-training performance. The proposed models achieves SOTA o downstream tasks.
Strengths: 1. A novel GPT-style point cloud pre-training framework is proposed.
2. Arranging point patches via Morton-order curve is effective.
3. The performance of the PointGPT-L is extraordinary, significantly surpassing previous methods.
Weaknesses: See questions.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: I think this is an excellent work. But I still have some questions:
1. Why PointGPT outperforms PointMAE and PointM2AE? I don’t understand what is ``overall object shape leakage`` in L109. In L163, PointGPT also utilizes position information to extract global structural information.
2. The dual masking strategy seems that further introduce MAE to PointGPT. What’s the performance when using the masking strategy only and removing the GPT loss?
3. The training of PointGPT-S is aligned to previous pretraining methods and PointGPT-S only presents minor improvement compared with Point-M2AE. The PointGPT-B and PointGPT-L are trained on larger datasets but the competitive methods Point-M2AE and Point-MAE are only trained on ShapeNet. Please provide an ablation study such as Point-M2AE+UHD+LHD to show that the large performance gain doesn’t trivially come from the larger training dataset.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The Morton-order curve may not be the optimal order.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **@Q1 - Overall object shape leakage**.
**(1) The overall object shape leakage is attributed to the positional encoding leakage of masked regions**. Point cloud data is constituted by the spatial positions of individual points. However, previous methods rely on introducing positional encoding into mask tokens to specify prediction regions, leading to the positional encoding of masked regions being leaked. Consequently, the model can effortlessly infer the overall shape. **(2)** In contrast to prior methods, **PointGPT entirely masks the positional encoding of masked regions**. To achieve this, PointGPT eliminates the need for mask tokens and their associated positional encoding by utilizing the auto-regressive prediction pattern and the relative direction prompt. Consequently, while positional encoding remains employed, our method ensures the complete masking of information from masked regions.
**@Q2 - Using the masking strategy only and removing the GPT loss**.
Thanks for your valuable feedback! We directly train the PointGPT-S model on the target datasets and maintain the dual masking strategy (`PointGPT-S-D`), as the removal of GPT loss would render pre-training infeasible. The experiments are performed on the ScanObjectNN (OBJ_BG, OBJ_ONLY, PB_T50_RS) and ModelNet40 (1k points) datasets, and the results are shown below.
|Methods|OBJ_BG|OBJ_ONLY|PB_T50_RS|ModelNet40|
|-|-|-|-|-|
|PointGPT-S-D|86.2|86.1|79.8|92.3|
|Ours|91.6|90.0|86.9|94.0|
We believe that your question aims to investigate the effectiveness of the dual masking strategy and the GPT loss. Therefore, we additionally conducted the following experiments focusing on these aspects.
**(1) Dual masking strategy experiments**. Experiments are conducted by varying the masking ratio on the ModelNet40 dataset, the results reveal that utilizing the dual masking strategy with an appropriate masking ratio significantly boosts accuracy.
|Ratio|0%|10%|30%|50%|70%|90%|
|-|-|-|-|-|-|-|
|Acc.|93.68|93.70|93.85|94.01|94.21|93.66|
**(2) GPT loss experiments**. We conducted two supplementary experiments. **(i) Removal of GPT loss** (`PointGPT-S-R`). The PointGPT-S model is directly trained on the target datasets without the pre-training stage and masking strategy, serving as the baseline for the PointGPT-S framework. The results reveal the significant accuracy enhancement achieved by the PointGPT pre-training stage. **(ii) Replacement of GPT pre-training with MAE pre-training** (`PointGPT-S-M`). The auto-regressive pre-training of PointGPT is replaced with Point-MAE's masking and reconstruction pre-training, while utilizing the dual masking strategy. The findings suggest that under the application of the dual masking strategy, PointGPT's pre-training method achieves superior generalization ability.
|Methods|OBJ_BG|OBJ_ONLY|PB_T50_RS|ModelNet40|
|-|-|-|-|-|
|PointGPT-S-R|86.7|87.1|79.8|92.3|
|PointGPT-S-M|89.6|88.3|81.7|92.5|
|Ours|91.6|90.0|86.9|94.0|
**@Q3 - Comparison with Point-M2AE**.
**(1) PointGPT-S achieves an enhanced accuracy-time trade-off**. It adopts a more concise design compared to Point-M2AE, which incorporates hierarchical architecture and a specialized multi-scale masking strategy involving backtracking. Consequently, **PointGPT-S achieves a remarkable 50% reduction in pre-training time**. This achievement holds substantial practical significance, conserving considerable wall-clock time, particularly when dealing with larger datasets and model parameters. Furthermore, PointGPT-S outperforms Point-M2AE-S across all tasks, despite Point-M2AE benefiting from multi-scale features.
|Methods|Params|Flops|Pre-training time|
|-|-|-|-|
|Point-M2AE-S|15.3M|4.0G|32.5 hours|
|PointGPT-S|19.5M|2.2G|15.8 hours|
**(2)** Additionally, to demonstrate **the superior performance of PointGPT with high-capacity models**, experiments are conducted by re-training Point-MAE and Point-M2AE using the ViT-B configuration with UHD and LHD datasets (`Point-MAE-B` and `Point-M2AE-B`). The results are depicted below, encompassing ScanObjectNN (OBJ_BG, OBJ_ONLY, PB_T50_RS) and ModelNet40 (1k points). These results demonstrate that **PointGPT outperforms comparison methods with comparable training parameters and identical training data**, and the improvements introduced by high-capacity PointGPT models are closely linked with the designs of PointGPT.
|Methods|OBJ_BG|OBJ_ONLY|PB_T50_RS|ModelNet40|Params|Pre-training time|
|-|-|-|-|-|-|-|
|Point-MAE-B|94.2|93.9|90.2|94.2|120.1M|73 hours|
|Point-M2AE-B|95.2|94.3|91.2|94.3|77.5M|156 hours|
|PointGPT-B|95.8|95.2|91.9|94.4|82.1M|75 hours|
**@Q4 - Morton code sorting**.
We advocate for Morton code sorting as the appropriate solution. **(1) Morton code sorting effectively preserves the adjacency relationship** between points, with adjacent points in one-dimensional space often being proximate to each other in three-dimensional space. Moreover, **(2) experimental results demonstrate the improved accuracy achieved by Morton sorting**. Ablation studies are performed on ScanObjectNN (OBJ_BG, OBJ_ONLY, PB_T50_RS) and ModelNet40 (1k points) datasets, using the PointGPT-S models. The performance is compared with the popular `KD-tree` sorting [1] and `Hilbert` code sorting [2] methods, indicating that Morton code sorting significantly outperforms other sorting methods. **In the future, we will strive to explore better sorting methods.**
|Methods|OBJ_BG|OBJ_ONLY|PB_T50_RS|ModelNet40|
|-|-|-|-|-|
|KD-tree|89.0|88.6|81.3|93.2|
|Hilbert|89.2|89.0|84.3|93.4|
|Ours|91.6|90.0|86.9|94.0|
[1] Gadelha, et al. "Multiresolution tree networks for 3d point cloud processing." Proceedings of the European Conference on Computer Vision (ECCV). 2018.
[2] Hilbert, et al. "Über die stetige Abbildung einer Linie auf ein Flächenstück." Dritter Band: Analysis· Grundlagen der Mathematik· Physik Verschiedenes: Nebst Einer Lebensgeschichte (1935): 1-2. | Summary: This paper proposes a point cloud pretraining method named PointGPT, which extends the generative pretraining approach of NLP to point clouds. With point patch partitioning and sorting, point embeddings are fed into a transformer decoder for autoregressive prediction. Besides, a dual masking strategy is proposed to enhance the learned representation. PointGPT is evaluated on several point cloud classification, few-shot classification, and part segmentation datasets with promising results.
Strengths: - Extends generative pretraining to point clouds. PointGPT lets us rethink the feasibility of generative pretraining on point cloud tasks.
- A complete framework with point sequencer and dual masking strategy. Re-arranging point clouds into a sequence of tokens like natural text and RGB images is challenging. PointGPT gives one possible solution using Morton code. The dual masking strategy appears to be a combination of masked modeling and autoregressive pretraining.
- Promising results on a set of datasets including classification, few-shot classification, and part segmentation.
Weaknesses: - Is it reasonable to use Morton code to sort unordered point clouds? As shown in Fig.2, PointGPT forces the current point $i$ to predict the coordinates of a point sorted by Morton code. However, point $i$ should be able to predict any point adjacent to it in 3D space, rather than a specified point. So what are the advantages of using Morton code sorting as the prediction target compared to conventional left-to-right and top-to-bottom prediction?
- The real gain of PointGPT (PointGPT-S) compared with the previous masked modeling methods (Point-M2AE) is not significant. Although PointGPT-B/L show an improvement on these tasks, they use more parameters and more training data.
- The overall contribution is limited. PointGPT is inspired by generative pretraining in NLP, but it doesn't show us how effective the generative pretraining approach is on point cloud tasks compared to previous methods.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **@Q1 - Morton code sorting**.
**(1) Morton code sorting effectively preserves the adjacency relationships between points**, allowing points that are close in three-dimensional space to maintain adjacency after sorting. However, the left-to-right and top-to-bottom (`L2R&T2B`) sorting method struggles to achieve this, due to the sparsity of point clouds. Furthermore, **(2)** to **prevent each point patch from predicting a fixed patch**, data augmentations, like rotation and translation, are applied to introduce variations in the order of the sorted patches. Consequently, each patch is tasked with predicting different patches under various transformations, necessitating accurate predictions within its local neighborhood. Notably, **(3)** experimental findings illustrate that **Morton sorting achieves improved accuracy**. To intuitively validate the effectiveness of Morton sorting, we perform additional experiments employing the PointGPT-S model on ScanObjectNN (OBJ_BG, OBJ_ONLY, PB_T50_RS) and ModelNet40 (1k points) datasets. The performance is compared with `L2R&T2B`, as well as the widely adopted `KD-tree` sorting [1] and `Hilbert` code sorting [2] methods. The results indicate that Morton code sorting outperforms alternative sorting methods.
|Methods|OBJ_BG|OBJ_ONLY|PB_T50_RS|ModelNet40|
|-|-|-|-|-|
|L2R&T2B|88.8|88.4|80.8|93.2|
|KD-tree|89.0|88.6|81.3|93.2|
|Hilbert|89.2|89.0|84.3|93.4|
|Ours|91.6|90.0|86.9|94.0|
Details: We implement L2R&T2B by dividing the point cloud into multiple 0.06x0.06x0.06 grids [3] and sorting the points according to their grid coordinates, with a primary sorting based on the x-axis, followed by the y-axis, and ultimately the z-axis.
[1] Gadelha, et al. "Multiresolution tree networks for 3d point cloud processing." Proceedings of the European Conference on Computer Vision (ECCV). 2018.
[2] Hilbert, et al. "Über die stetige Abbildung einer Linie auf ein Flächenstück." Dritter Band: Analysis· Grundlagen der Mathematik· Physik Verschiedenes: Nebst Einer Lebensgeschichte (1935): 1-2.
[3] Thomas, et al. "Kpconv: Flexible and deformable convolution for point clouds." Proceedings of the IEEE/CVF international conference on computer vision. 2019.
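For readers unfamiliar with Morton (Z-order) sorting, the sketch below (our own illustration, assuming coordinates normalized to [0, 1); not the paper's implementation) shows the standard bit-interleaving construction of the Morton key; sorting patch centers by this key tends to keep 3D neighbors adjacent in the resulting 1D sequence.

```python
def part1by2(n):
    # Spread the bits of a 10-bit integer so each original bit is
    # followed by two zero bits (standard Morton-code bit trick).
    n &= 0x000003FF
    n = (n ^ (n << 16)) & 0xFF0000FF
    n = (n ^ (n << 8)) & 0x0300F00F
    n = (n ^ (n << 4)) & 0x030C30C3
    n = (n ^ (n << 2)) & 0x09249249
    return n

def morton3d(x, y, z, bits=10):
    # Quantize each coordinate in [0, 1) to a `bits`-bit integer and
    # interleave the bits into a single Z-order key.
    scale = (1 << bits) - 1
    xi, yi, zi = (int(c * scale) for c in (x, y, z))
    return part1by2(xi) | (part1by2(yi) << 1) | (part1by2(zi) << 2)

# Sorting toy patch centers by Morton key: the two nearby points
# (x = 0.1 and x = 0.12) stay adjacent in the resulting 1D order.
centers = [(0.9, 0.1, 0.1), (0.1, 0.1, 0.1), (0.12, 0.1, 0.1)]
order = sorted(range(len(centers)), key=lambda i: morton3d(*centers[i]))
print(order)  # → [1, 2, 0]
```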
**@Q2 - Comparison with Point-M2AE**.
**(1) PointGPT-S attains an improved accuracy-time balance**. PointGPT adopts a more concise design compared to Point-M2AE, which incorporates hierarchical architecture and a specialized multi-scale masking strategy involving backtracking. Consequently, **PointGPT demonstrates a remarkable 50% reduction in pre-training time**, which holds significant practical value, saving considerable wall-clock time when scaling up datasets and model parameters. Furthermore, PointGPT-S surpasses Point-M2AE-S across all tasks, despite Point-M2AE benefiting from multi-scale features. Importantly, PointGPT is not confined to small-scale models; we also explore the potential of high-capacity models.
|Methods|Params|Flops|Pre-training time|
|-|-|-|-|
|Point-M2AE-S|15.3M|4.0G|32.5 hours|
|PointGPT-S|19.5M|2.2G|15.8 hours|
**(2)** The additional experiments demonstrate that **the improvements introduced by PointGPT-B/L are closely linked with the designs of PointGPT**, which effectively address the information leakage and enhance the generalization ability. Specifically, experiments are performed to re-train Point-M2AE with ViT-B configurations using UHD and LHD datasets (`Point-M2AE-B`). The results are presented below, encompassing ScanObjectNN (OBJ_BG, OBJ_ONLY, PB_T50_RS), ModelNet40 (1k points), number of parameters (Params), and total pre-training time, demonstrating **the superior performance of PointGPT over Point-M2AE models, even under comparable training parameters and identical training data**.
|Methods|OBJ_BG|OBJ_ONLY|PB_T50_RS|ModelNet40|Params|Pre-training time|
|-|-|-|-|-|-|-|
|Point-M2AE-B|95.2|94.3|91.2|94.3|77.5M|156 hours|
|PointGPT-B|95.8|95.2|91.9|94.4|82.1M|75 hours|
**@Q3 - Contributions**.
Recent pre-training methods for point clouds also belong to the generative pre-training approach, such as Point-MAE. However, a foundational distinction lies in their adoption of BERT-style masked modeling pre-training approaches, contrasting with our exploration of GPT-style auto-regressively generative pre-training methods.
**Compared to preceding methods, PointGPT effectively addresses the information leakage**. Previous methods rely on introducing positional information to specify the regions for prediction, which leads to a significant and widespread issue of information leakage, limiting the efficacy of the pre-training process. To overcome this challenge, PointGPT utilizes an auto-regressive pattern and the relative direction prompt to specify prediction patches. This approach obviates the need for explicitly utilizing positional information, effectively addressing information leakage, and enhancing the generalization ability, as demonstrated in @Q2.
**Our contributions also encompass (1) the first attempt of GPT on point clouds and (2) the exploration of high-capacity model training** within the point cloud domain. (1) PointGPT explores the point cloud sorting methods to manage the disordered nature of point clouds. Additionally, we propose a dual masking strategy and an extractor-generator architecture to overcome the challenges associated with information density differences and gaps between the generation and downstream tasks. (2) To fully unleash the power of PointGPT, we collect a larger pre-training dataset. Moreover, a subsequent post-pre-training phase is introduced alongside a labeled hybrid dataset, facilitating the integration of semantic information from various sources into the models.
|(1) First attempt of GPT on point clouds|(2) High-capacity model training|
|-|--|
|point cloud sorting|post-pre-training stage|
|dual-masking strategy|unlabeled hybrid dataset|
|extractor-generator architecture|labeled hybrid dataset|
|relative direction prompt|| | Rebuttal 1:
Rebuttal: We thank all reviewers for their thoughtful feedback! We are pleased to find that reviewers 5U9K and 4dwU appreciate PointGPT as interesting and excellent work. Moreover, reviewers 4dwU and LJmd consider our motivation to be reasonable and well-explained, effectively mitigating information leakage. Furthermore, reviewers 4dwU and EUQH appreciate the novelty of our method as the first attempt of auto-regressive pre-training on point cloud tasks, prompting the rethinking of this approach in such tasks. We are delighted that all reviewers acknowledge the significant improvements and promising performance achieved by our approach. We have carefully considered all questions, concerns, and comments provided by reviewers and addressed all of them appropriately. We provide detailed responses to each review separately and believe that our responses address all of the reviewers' concerns. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
On the Role of Noise in the Sample Complexity of Learning Recurrent Neural Networks: Exponential Gaps for Long Sequences | Accept (poster) | Summary: The authors consider the class of noisy multi-layered sigmoid recurrent neural networks, "noisy" meaning that noise is added to the output of each neuron in the network. They prove that the sample complexity of the noisy class is significantly better than that of the non-noisy one, with one clean upper bound and one clean lower bound.
Strengths: 1. Clear writing
2. Well organised presentation
3. Reasonable assumptions on an important model
4. Studied multilayer neural networks, making a complete story.
5. Interesting technique (TV covering numbers), handy for composing classes
Weaknesses: Here are some minor weaknesses.
1. The authors could have discussed the practical implications more. It occurs to me that adding noise might be one of the keys to next-generation AI, now that deep learning is currently struggling with sample complexity.
2. Would be better if there could be discussion about expressive power.
3. Sigmoid, not ReLU.
4. lack of experimental results
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: How much does noise affect expressive power?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors adequately addressed most of the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their suggestions and comments. We discuss the questions and concerns mentioned by the reviewer in the following.
**Response to weaknesses.** We agree with the reviewer that practical results can back up the theoretical results of the paper, and we hope to pursue this as future work. Also, extending the results to ReLU networks rather than the sigmoid function is a future direction for this work that can increase its generality.
**Response to Question.** This is a very good question. We want to mention that, as discussed in the paper, our results also work in the regime where we have negligible amounts of noise and the noise is only required to enable the analysis. So if we consider current computers with finite precision, the noisy and non-noisy networks perform almost the same when implemented on them, and we can get exponential improvements in the sample complexity without losing any expressive power. A related discussion is also included in Lines 61-66 and 210-213. We think analyzing the expressive power in the regime where the noise is not negligible, and therefore these two classes may have different performance, is an interesting future direction.
---
Rebuttal Comment 1.1:
Comment: Very good, makes sense. | Summary: This work studies the sample complexity of PAC learning noisy recurrent neural networks with respect to the ramp loss. Noisy recurrent networks are defined as multi-layered feed forward networks with sigmoid activation where independent mean-zero Gaussian noise is added to the activation.
The main result is that one can learn these noisy networks with sample complexity $\tilde{O}(\frac{w\log(T/\sigma) + \log(1/\delta)}{\epsilon^2})$ (Theorem 15). Though it differs from prior work in terms of loss function and noise assumption, the bound notably avoids any dependence on the norms of the weights, which was common in upper bounds in prior work. These results are much more intriguing once contrasted with their non-noisy counterpart, i.e., the fact that non-noisy PAC learning has a lower bound of $\Omega(\frac{wT+\log(1/\delta)}{\epsilon^2})$ (Theorem 10), which exhibits linear dependence on $T$ as opposed to $\log(T)$ as in the noisy case.
The techniques used to prove the upper bound may be of independent interest to the PAC learning community. The authors use techniques derived from Fathollah Pour and Ashtiani (2022) to study covering numbers of random hypotheses and extend these tools so that their analysis is based on uniform covering number with respect to total variation distance.
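To make the contrast between the two bounds concrete, here is a small illustrative calculation; the constants hidden by $\tilde{O}(\cdot)$ and $\Omega(\cdot)$ are ignored, and the particular values of $w$, $T$, $\sigma$, $\epsilon$, $\delta$ are hypothetical:

```python
import math

def noisy_upper_bound(w, T, sigma, eps, delta):
    """Theorem 15's shape, up to constants: (w*log(T/sigma) + log(1/delta)) / eps^2."""
    return (w * math.log(T / sigma) + math.log(1 / delta)) / eps**2

def clean_lower_bound(w, T, eps, delta):
    """Theorem 10's shape, up to constants: (w*T + log(1/delta)) / eps^2."""
    return (w * T + math.log(1 / delta)) / eps**2

# Hypothetical values: width 100, sequence length 10^6, tiny noise.
noisy = noisy_upper_bound(w=100, T=1e6, sigma=1e-6, eps=0.1, delta=0.01)
clean = clean_lower_bound(w=100, T=1e6, eps=0.1, delta=0.01)
# The noisy bound grows with log(T/sigma); the clean one grows linearly in T.
```

The point of the sketch is only the scaling: for long sequences the $\log(T/\sigma)$ term stays tiny even for very small $\sigma$, while the $wT$ term dominates.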
Strengths: - The paper shows an interesting contrast between learning noisy vs. clean recurrent neural networks. There is, surprisingly, an exponential gap in sample complexity w.r.t. $T$, and noisy networks are easier to learn. This gives way to the suggestion that learning clean networks can "bypass" the existing hardness result by assuming there is an infinitesimally small noise $\sigma$ close to 0 for the clean network, allowing the actual sample complexity upper bound to be smaller in the case of finite-precision machines.
- The proof techniques are interesting as the paper works with random hypotheses and establishing sample complexity via covering numbers under such randomness.
Weaknesses: - The motivation to study noisy recurrent neural networks, as it is defined in this paper, is unclear to me. The formulation is similar to a noisy dynamical system but it is unclear why such noise assumption would be natural here for neural networks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - On one hand, learning noisy networks being easier than learning clean networks is surprising since it's noisier. Yet, it may be the case that, by adding noise (smaller signal-to-noise ratio), it becomes easier to learn as more hypotheses may seem plausible (as if blurring an image). Is this why the noise is helping the learner?
- What is the main reason for studying the ramp loss?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper is clear with limitations and comparison to prior/future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their assessment and feedback. We address the concerns and questions raised by the reviewer in the following.
**Response to Weaknesses.** Using noise in training neural networks is a natural heuristic to avoid overfitting and has been used in various scenarios (e.g., in denoising autoencoders, drop-out noise, noisy RNNs, etc.). For RNNs, the issue of overfitting is more serious especially for learning from long sequences. Our study can quantify the effect of noise for generalization in RNNs.
But perhaps more importantly, we use noisy RNNs as a "proof technique". As discussed in Lines 61-66 and 210-213, in practice we can set the amount of noise so negligible that when implemented on a device with finite precision the network performs almost similar to the natural non-noisy version, without sacrificing the performance. In other words, our analysis shows that there is a stark difference between finite precision machines and true real-valued machines when it comes to training RNNs.
On a more technical level, adding noise to the network makes it possible to use the TV covering number analysis, which comes with a useful composition theorem. It makes it possible to use new techniques, exploit the fact that the same function is being applied recursively, and get a sample complexity bound that is exponentially smaller than the non-noisy version.
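For readers unfamiliar with the model class under discussion, one recurrence of such a noisy sigmoid RNN can be sketched as follows; this is a minimal illustration, and the shapes, parameter names, and noise level are hypothetical rather than taken from the paper:

```python
import numpy as np

def noisy_rnn_step(h, x, W, U, b, sigma, rng):
    """One recurrence of a noisy sigmoid RNN: the same fixed function is
    applied at every step, with fresh mean-zero Gaussian noise added to
    each neuron's output (matching the fresh-noise-per-layer model)."""
    pre = W @ h + U @ x + b
    return 1.0 / (1.0 + np.exp(-pre)) + rng.normal(0.0, sigma, size=pre.shape)

rng = np.random.default_rng(0)
w, d, T = 8, 4, 50            # width, input dim, sequence length (hypothetical)
W, U, b = rng.normal(size=(w, w)), rng.normal(size=(w, d)), np.zeros(w)
h = np.zeros(w)
for x in rng.normal(size=(T, d)):
    h = noisy_rnn_step(h, x, W, U, b, sigma=1e-3, rng=rng)
```

Because the sigmoid outputs lie in $(0,1)$ and the added noise is tiny, the noisy hidden state stays essentially in the same range as its clean counterpart.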
**Response to Question 1.** Although it may be intuitive that noise makes the class less complex, one of our contributions is actually proving a bound on the complexity of noisy recurrent models by using new techniques and analysis for random functions. It is worth mentioning that we are analyzing the sample complexity of PAC learning this class; we want to find the smallest number of samples required to make sure that, for any data-generating distribution, the error of the trained/returned network $\hat{f}\in\overline{\mathcal{F}}$ is comparable with the smallest expected error that is achievable by a noisy network from the same class $\overline{\mathcal{F}}$, i.e., $\arg\min_{f\in\overline{\mathcal{F}}}\mathbb{E}[l^{\gamma}(f,x,y)]$. It may seem intuitive that by adding noise to a single classifier $f$ we can hope to reduce its overfitting and bound the difference between its empirical and expected error. However, it is not obvious and immediate that the expected error of the returned classifier $\hat{f}$ is close to the minimum possible expected error over the class of noisy networks, i.e., whether $\mathbb{E}[l^{\gamma}(\hat{f},x,y)] \leq \min_{f\in\overline{\mathcal{F}}}\mathbb{E}[l^{\gamma}(f,x,y)] + \epsilon$ with high probability, and whether we can guarantee PAC learning with smaller sample complexity. Our main contribution is a novel technique for studying the complexity of noisy RNNs and proving that the sample complexity of PAC learning them is logarithmic in $T$.
**Response to Question 2.** As mentioned in Line 139, the main features of the ramp loss that we use are the Lipschitzness and boundedness, which are common in the literature of neural networks for analyzing the sample complexity. Otherwise, if the loss function is unbounded, it would be impossible to guarantee generalization for networks with unbounded weights. Our sample complexity analysis for noisy networks is more general and works for any loss function that has the above properties. We chose to use ramp loss because we wanted to have concrete theorem statements and to be able to contrast the upper bound of noisy network with the lower bound of the non-noisy networks.
---
Rebuttal Comment 1.1:
Title: Reply to Authors
Comment: I have read the rebuttal. It would certainly be interesting to see if the upper bound of the noisy network can be generalized to deal with any Lipschitz bounded function w.r.t. related parameters like Lipschitzness.
Thank you for the response.
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer KR7t
Comment: Thanks for reading our rebuttal and the response. We would like to elaborate more on why our upper bound holds for any bounded Lipschitz loss function. Lemma 42 in Appendix E is the main result for translating the covering number of the network to that of its composition with the loss. The only property of the ramp loss that is used in this proof is its Lipschitzness, i.e., Line 838. Replacing the ramp loss with any other loss with Lipschitz constant $L$ will lead to a similar result, i.e., if we let $\mathcal{F}_{L}$ be the composition of $\mathcal{F}$ with the Lipschitz loss, then we have $N_U(\epsilon, \mathcal{F}_{L}, m,||.||_2^{\ell_2}) \leq N_U(\epsilon/L, \mathcal{F}, m,||.||_2^{\ell_2})$. Consequently, in the proof of Theorem 15 we would only need to use a covering number with accuracy $\epsilon/L$ instead of $\epsilon\gamma$ between Lines 782 and 783. This results in a logarithmic dependence on $L$ in the final sample complexity, i.e., Theorem 15 for a loss with Lipschitz constant $L$ would state a sample complexity of $\tilde{O}\left(\frac{w\log (\frac{TL}{\sigma})+\log(1/\delta)}{\epsilon^2} \right)$.
This is surprising because, without the noise, there is a linear in T scaling of the sample complexity. Furthermore, the quantity of noise can be very small, since the log(1/sigma) dependence is mild.
Strengths: The paper is notationally heavy, but is clearly written and was therefore easy to follow. The comparison with past works was good, and I felt that the main result was surprising.
Weaknesses:
From a technical point of view:
* The proof of Theorem 10 (the sample complexity lower bound for learning with the Ramp Loss) is effectively the same as Koiran and Sontag (1998), who proved it for the 0-1 loss. The main observation is that neural networks with sigmoid activations can be forced to output approximately 0-1 valued functions by rescaling the final layer weights. And therefore the ramp loss is approximately equal to the 0-1 loss.
* Most of the steps in the proof of Theorem 26 draw heavily on the work of "Benefits of Additive Noise in Composing Classes with Bounded Capacity" by Fathollah Pour and Ashtiani, 2022. The main technical novelty seems to be in Theorem 24, which gives an inequality for the TV covering number under composition. Did I understand correctly that this is the main new element?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: * Is the noise $\mathcal{G}_{\sigma}$ in Definition 13 reused, or fresh noise on each layer?
* There seem to be some minor technical issues with the proof of Theorem 10 as stated. In the proof of Lemma 11 (restated as Lemma 32), the authors on line 604 define $z$. The authors write the definition of $z$ as
$$z = \min_b \arg\max_{0 < x < 1/2} P[|Last(b^R(U,T-1))| \geq x] \geq 1-\eta.$$
This is nonstandard notation and it is unclear how to parse it. The authors state that "intuitively, z is the largest possible value such that "
$$P[-z < Last(b^R(U,T-1)) < z] < \eta$$ Considering this and the remainder of the proof, I am assuming that the authors meant
$$z = \max \{x \in (0,1/2) : P[|Last(b^R(U,T-1))| \geq x] \geq 1-\eta \mbox{ for all } b\}.$$
However, $z$ may not be well-defined because the maximum might not exist. Consider the network $b$ that sends any input to $Last(b^R(U,T-1)) = 0$ almost surely. Then no $x \in (0,1/2)$ is valid for the above problem -- we would have to take $x = 0$, which would lead to an issue with the proof since then $z = 0$.
* How does the construction for Theorem 10 break if sigma=1/T^C magnitude noise is added for some large constant C? Your Theorem 26 would predict T*polylog(T) sample complexity, so the construction in Theorem 10 should break, but I don't see exactly how. Could you please provide some intuition?
Typos:
* Appendix C, Proof of Theorem 10, have [l]^{\gamma} and [l]^{0-1}, when it should be [l^{gamma}] and [l^{0-1}]
* Statement of Theorem 23, display math between lines 311 and 312. Why is there ". N_1" in the expression? Is it supposed to be times N_1?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their careful assessment of the paper and we appreciate their helpful comments. The questions and concerns mentioned by the reviewer are discussed in the following.
**Response to Weaknesses.** Yes, the key technical contribution is to prove a general result on the covering number of recurrent models based on the recurring class, and a sample complexity upper bound that is logarithmic in $T$ (hence, the surprising exponential gap). On the other hand, most (if not all) of the techniques used in the literature (for non-noisy networks) are based on "unfolding the recurrence" and result in a linear dependence on $T$. In fact, as we showed in the lower bound, for non-noisy networks, getting a (super-)linear upper bound is inevitable.
We would like to add that simply using the framework of Fathollah Pour and Ashtiani (2022) will again give a linear dependence on $T$. Therefore, we had to use a new proof technique to be able to exploit the fact that in RNNs, a fixed function is being reused recursively. With this and other nuances in the proof (e.g., handling the new data that is inputted to the network in every recurrence) we were able to prove the sub-linear (logarithmic upper) bound.
**Response to Questions 1.** The noise is i.i.d., and a fresh sample is used in each layer.
**Response to Questions 2.** We thank the reviewer for bringing this to our attention! In fact, there is a mistake in the current proof, but it can be fixed easily. In short, instead of defining a general value of $c$, we can define a specific value $c_b$ for any given network $b$ to resolve the issue mentioned by the reviewer.
To see this, we should have defined $z_b$ specifically for any network $b$ as $z_b = \sup \{ 0\leq x<1/2 : \mathbb{P}[\left|\text{Last}(b^R(U,T-1))\right| \geq x] \geq 1-\eta \}$. For any $b$ and its corresponding $f \in \mathcal{F}_w$, if $z_b>0$ we can multiply the last row of the weight matrix of the last layer by $c_b=\phi^{-1}(\gamma)/\phi^{-1}(z_b)$. In case $z_b=0$, we do not need to change any weights and we let $c_b=1$. We can conclude that for the same function $b$ and the corresponding $h_b = \text{Last}(b^R(U,T-1)) \in \mathcal{H}_w$ we have $\mathbb{E}_{(U,y)\sim \mathcal{D}}[l^{\gamma}(h_b,U,y)] \leq \mathbb{E}_{(U,y)\sim \mathcal{D}}[l^{0-1}(f,U,y)] + \eta$ as desired, because the ramp loss and the $0$-$1$ loss output the same value with probability more than $1-\eta$. Therefore, the results of Lemma 11 and Theorem 10 are correct and still hold. We will make sure to modify the proof of Lemma 11 and make it exact in the next version of this paper.
**Response to Questions 3.** We hope we have understood the question correctly: the reviewer seems to be asking why, if we set $\sigma$ to be inverse exponential in $T$, the upper bound in Theorem 15 becomes $T \cdot \mathrm{polylog}(T)$ while Theorem 10 suggests a lower bound of $\Omega(T)$. We want to emphasize that this is not a contradiction and the construction in Theorem 10 does not break. We would only have a loose upper bound that is not better than the lower bound in Theorem 10. The intuition behind the loose upper bound when $\sigma\rightarrow 0$ is that we are analyzing the covering number with respect to the TV distance, which we show is sufficient for learning but is not necessary in general. Intuitively, the noise smooths out the output and ensures that with a small change in the weights of the network the output distributions do not become very different, i.e., they have small TV distance. However, when $\sigma\rightarrow 0$, the output distributions are almost Dirac delta measures and the TV distance is almost equal to $1$, so we do not quite get the benefits that we wanted from adding noise. We showed that the sample complexity has a mild logarithmic dependence on $1/\sigma$, so for moderately small values of $\sigma$ we would still get a tighter generalization bound for noisy networks. However, when $\sigma$ is set to be so small, one loses the benefits of additive noise. In this regime, one can basically use the basic (super-)linear bounds for non-noisy RNNs.
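This smoothing intuition can be checked numerically: for two univariate Gaussians with equal variance $\sigma^2$ and means a distance $d$ apart, the TV distance has the closed form $\mathrm{erf}\left(d/(2\sqrt{2}\,\sigma)\right)$, which tends to $1$ as $\sigma \to 0$ (near-Dirac outputs) and to $0$ as $\sigma$ grows. A minimal sketch, with hypothetical values of $d$ and $\sigma$:

```python
import math

def tv_gaussians(d, sigma):
    """Total variation distance between N(0, sigma^2) and N(d, sigma^2):
    TV = erf(d / (2*sqrt(2)*sigma))."""
    return math.erf(d / (2.0 * math.sqrt(2.0) * sigma))

# Think of d as the output shift caused by a small change in the weights.
# With non-negligible noise the two output distributions are close in TV;
# as sigma -> 0 they approach Dirac measures and the TV distance nears 1.
for sigma in (10.0, 1.0, 0.1, 0.001):
    print(sigma, tv_gaussians(d=0.5, sigma=sigma))
```

This is only a one-dimensional caricature of the covering argument, but it captures why TV-based covers lose their advantage in the $\sigma \to 0$ limit.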
**Typos.** Yes, in the statement of Theorem 23 it is supposed to be times $N_1$. We thank the reviewer for noticing the typos and will make sure to fix them in future versions of the paper.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks for answering my questions. I'm happy to keep my score. | Summary: In this paper, the authors consider a class of noisy recurrent neural networks under the ramp loss setting, and prove that the noisy class can be learned with $O(w \log (T/\sigma))$, where $w$ is the width, $T$ is the length of the sequence, and $\sigma$ is noise variance.
The derived results demonstrate sample efficiency when compared to the standard noiseless case.
The proof is based on the result of covering number of a class of random maps under the TV distance.
Strengths: - the sample complexity $O(w \log (T/\sigma))$ is proved for such noise RNN under the ramp loss
- the covering number under the TV distance is given
Weaknesses: - The motivation of using ramp loss is unclear to me and quite weak. For example, ramp loss is non-convex and not commonly used in practice. I understand that the ramp loss will be quite close to the 0-1 loss, and thus the derivation would be relatively easier, e.g., Lemma 11. Nevertheless, it decreases the technical difficulty as well as the motivation.
- The technique used is from (Fathollah Pour and Ashtiani 2022) on the covering number of a class of random maps under the TV distance. Their results have already been applied to shallow and deeper NNs. The contribution appears not very significant in the setting of an RNN with ramp loss. When checking the proof at a high level, the key part, Lemma 11, aims to build the connection between the ramp loss and the 0-1 loss.
- According to the derived sample complexity, under a larger noise level this noisy class can be learned with fewer samples. It makes sense, because noise (which gradually becomes the main component) can be easier to learn than the data. However, this result is sometimes not enough in my view, because a larger noise injection would lead to worse performance; solely adding noise results in nothing. I think this requires more detailed discussion.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the above
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their comments and feedback. In the following, we address the concerns and questions mentioned in the review.
**Response to Weaknesses 1.** As we mentioned in Line 139, the only features of the ramp loss that we use to derive the upper bound are its Lipschitzness and boundedness. Our results are more general and work for any other loss that has these properties. It is also common in the literature to make these two assumptions on the loss function when the generalization of neural networks is analyzed theoretically. Otherwise, if the loss function is unbounded, it would be impossible to guarantee generalization without making additional assumptions on the norms of the weights of the network.
For the lower bound, we used the specific choice of ramp loss to be concrete in our theorem statements and to be able to contrast it with the upper bound. In fact, the constructions in Lemma 11 and, consequently, the lower bound in Theorem 10 only rely on the above two properties and an additional property that the loss function $l(z)$ converges to $0$ as $z$ goes to $\infty$ and converges to $1$ as $z$ goes to $-\infty$, which is again natural in the analysis of neural networks.
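The properties relied on above can be checked directly against a standard parameterization of the ramp loss; one common convention is shown below, and the exact definition in the paper may differ slightly:

```python
def ramp_loss(z, gamma):
    """Ramp loss on the margin z: 1 for z <= 0, 0 for z >= gamma,
    and linear in between. Bounded in [0, 1] and (1/gamma)-Lipschitz."""
    return min(1.0, max(0.0, 1.0 - z / gamma))

gamma = 0.5
zs = [i / 100.0 for i in range(-300, 301)]
vals = [ramp_loss(z, gamma) for z in zs]
# Boundedness: values stay in [0, 1].
assert all(0.0 <= v <= 1.0 for v in vals)
# (1/gamma)-Lipschitzness, checked on the grid (step 0.01).
assert all(abs(vals[i + 1] - vals[i]) <= (1 / gamma) * 0.01 + 1e-12
           for i in range(len(vals) - 1))
# Limiting behavior used in the lower-bound construction:
# the loss converges to 1 as z -> -inf and to 0 as z -> +inf.
assert ramp_loss(-1e9, gamma) == 1.0 and ramp_loss(1e9, gamma) == 0.0
```

The final assertions illustrate the additional property invoked for the lower bound: on rescaled, near-0-1-valued networks the ramp loss coincides with the classification loss.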
**Response to Weaknesses 2.** We would like to emphasize that Theorem 15 (main upper bound result) is based on the result of Theorem 24 which is novel and not present in Fathollah Pour and Ashtiani (2022) (FA22). As we discussed in Lines 312-331, simply using the result of FA22 will give a loose (linear) upper bound in terms of the length of the sequence. Therefore, the bound would not be better than those of the existing literature that simply "unfold the recurrence" and analyze the recurrent network as a larger and more complex class. On the other hand, we prove that with a more careful analysis of the noisy function it is possible to take into account the fact that the same fixed function is being applied recursively. Using the new analysis that we offer for any recurrent model, we conclude tighter generalization bound for RNNs which is logarithmic in $T$. When contrasted with our (linear) lower bound, we get the exponential gap as a surprising result. There are other nuances in the proofs that are reflected in the (relatively long) supplementary material.
**Response to Weaknesses 3.** Proving a sample complexity bound for noisy RNNs that is logarithmic in $T$ is one of the main contributions of our paper, and it was not, to the best of our knowledge, present in the literature before. We agree that, intuitively, it is true and not surprising that adding noise reduces the complexity of the class of networks and results in tighter generalization bounds. However, what is surprising is the fact that even a negligible amount of noise is enough to enable the noisy analysis. For example, if the neural network is implemented on a device with finite precision, then one can set $\sigma=10^{-240}$ and get a generalization bound which is logarithmic in $T$, without sacrificing any accuracy. These discussions are also included in Lines 61-66 and 210-213.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response.
For the ramp loss, the authors mentioned that this paper only requires the classification loss to be Lipschitz continuous and bounded. Actually, it excludes the commonly used cross-entropy and hinge losses.
The developed technique is beneficial to recurrent models, and I'm wondering whether the conclusion (e.g., noise helps sample efficiency) still holds for general architectures, e.g., MLP, CNN, Transformer.
---
Reply to Comment 1.1.1:
Comment: Thanks for your response. Regarding the unbounded loss functions such as cross-entropy loss or hinge loss, we would like to draw the reviewer's attention to an important distinction between two scenarios: (i) where these loss functions are used as "surrogate losses" but the goal is bounding the classification (0-1) loss or (ii) where the actual goal is having a small expected value for these unbounded losses.
The first scenario is common in the literature, and in fact our generalization bound remains valid in this case. To see this, note that the ramp loss is upper bounded by these surrogate losses. Therefore, a small empirical value of the surrogate loss also implies a small empirical ramp loss. On the other hand, we guarantee that the expected and empirical values of the ramp loss are close to each other. Therefore, we can conclude that the expected error of ramp loss is smaller than or close to the empirical value of the surrogate losses. We also know that the 0-1 loss is always upper bounded by the ramp loss. Therefore, bounding the sample complexity of generalization for ramp loss implies that the expected value of classification loss is not larger than the empirical value of these surrogate losses.
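The pointwise ordering used in this argument — the $0$-$1$ loss is upper bounded by the ramp loss, which is in turn upper bounded by a margin-scaled surrogate such as the hinge loss — can be sketched numerically; the parameterization below is one common convention and is illustrative only:

```python
def zero_one(z):            # 1 iff the margin is non-positive
    return 1.0 if z <= 0 else 0.0

def ramp(z, gamma=1.0):     # clipped linear ramp on the margin
    return min(1.0, max(0.0, 1.0 - z / gamma))

def hinge(z, gamma=1.0):    # margin-scaled hinge surrogate (unbounded below z = 0)
    return max(0.0, 1.0 - z / gamma)

# Pointwise sandwich: 0-1 <= ramp <= hinge for every margin value,
# so a small empirical surrogate loss implies a small empirical ramp loss,
# and the ramp-loss generalization bound then controls the expected 0-1 error.
for z in [x / 10.0 for x in range(-50, 51)]:
    assert zero_one(z) <= ramp(z) <= hinge(z)
```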
For the second scenario it is impossible to bound the gap between the expected and empirical values of unbounded losses without making extra assumptions about the data distribution or the network, e.g., bounding the weights. Intuitively, this is because even a single mistake on a low probable input point can have a detrimental effect on population loss.
We focused on the ramp loss to be able to demonstrate the gap between the lower and the upper bounds. Otherwise, just like the first scenario, our upper bound can be used to bound the expected 0-1 error when the empirical value of the hinge loss or cross entropy loss is small.
Regarding the second question, we want to emphasize that the main message of our paper is that noise can exponentially reduce the dependency of sample complexity on sequence length in RNNs (even for negligible amounts of noise), which is a surprising result and is based on a novel technique for noisy recurrent models. If the question is about using MLP, CNN, etc. as the recurring class in the RNN, our result indeed is with respect to MLP (with no bound on the weights) as the recurring block. The extension to CNN, Transformer, etc., is an exciting future direction. If the question concerns other architectures in general, we think that since the noise is beneficial in MLPs and RNNs, it would be a good future direction to apply noisy analysis to other networks as well.
Title: Reply to Reviewer Ei2R | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
ConRad: Image Constrained Radiance Fields for 3D Generation from a Single Image | Accept (poster) | Summary: This paper proposes an image-constrained neural radiance field, a representation that takes a reference image into account. Such a representation helps the task of 3D reconstruction from a single image with guidance from pretrained diffusion models. Experiments demonstrate that the proposed method can boost the quality of the reconstructed objects.
Strengths: 1. The proposed method is simple and effective.
2. The experiments show the effectiveness of the proposed image-constrained neural radiance field.
Weaknesses: 1. For viewpoints other than the reference view, the shapes and images are blurry. The proposed method can only enforce the appearance under the reference view to be realistic, giving limited improvement to views that do not have overlap with the reference view.
2. In line 49, '…can explicitly capture an input image without any training' which is ambiguous. The underlying depth of the input image still requires depth loss to constrain the learning of the radiance field.
3. The proposed method claims to fully utilize the information from the reference view compared to previous methods (L55-57). The quality is better than the baselines, but what about the efficiency? Does it take less time to reach a good reconstruction result?
4. There are some typos in the paper, e.g., L54 'a' should be removed; Eq.(4) 'o_p' should be bold to align with L172
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please refer to weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have mentioned some of the limitations in Sec 5.5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer mCGc for the positive review and constructive feedback. We clarify the concerns raised by the reviewer here.
> 1. For viewpoints other than the reference view, the shapes and images are blurry.
Compared to the reference view, the renderings do appear less crisp in novel viewpoints. However, this is a limitation of most image-to-3D approaches. In fact, we demonstrate in our results that, compared to existing works, our approach produces higher-quality reconstructions even in novel viewpoints (Figure 3 and Table 1).
> 2. In line 49, '…can explicitly capture an input image without any training' which is ambiguous. The underlying depth of the input image still requires depth loss to constrain the learning of the radiance field.
We will rephrase this in the paper to make it clearer. This statement refers to the rendering of the radiance field from the reference viewpoint. The proposed method can indeed capture an input image *in the reference viewpoint* without any training. The constraints applied to the radiance field are instantaneous and do not require any optimization. For example, in Figure 1, we show a visualization of the constraints applied using the image of a chair. As we show on the right, the rendering from the reference viewpoint is accurately captured without any training.
> 3. The quality is better than the baselines, but how about the efficiency? Does it take shorter time to reach a good reconstruction result?
Intuitively, we hoped to see improvements in efficiency since ConRad captures the reference view accurately. However, in practice we did not observe convincing speed improvements in the number of updates required. Therefore, we do not make this claim in the paper. We keep the number of updates equal to RealFusion which works well for all objects. Both methods take approximately 20 minutes on a single A100 GPU. NeuralLift-360 uses twice the number of updates and takes 1 hour on the same hardware setup. We will add these details to the paper.
> 4. There are some typos in the paper, e.g., L54 'a' should be removed; Eq.(4) 'o_p' should be bold to align with L172
Thank you for pointing this out. We will make these changes.
---
Rebuttal Comment 1.1:
Comment: Thanks for the explanations. The authors have addressed my concerns and I will update my ratings. | Summary: This paper proposes a novel parametrization for NeRF designed to facilitate the task of single image (+ foreground mask) to 3D model generation. The authors modify the volumetric rendering equation of the NeRF volume to include explicit constraints given by the single available view. In particular, by construction, points intersected by rays corresponding to the background of the conditioning image will have their density set to zero and points intersected by rays corresponding to the foreground will have an RGB color equal to the one in the conditioning image (instead of the one encoded in the radiance field). Given this modified NeRF rendering procedure the authors use textual inversion + an SDS loss similar to Dreamfusion to optimize a full 3D model. The proposed method can be used both to generate plausible 3D models of real objects from a single view as well as to generate 3D models from pure text using a text2image model to get the conditioning view. The method is compared to concurrent works and achieves better quality and more crisp 3D models.
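As an illustration of the hard constraint described in this summary, here is a minimal NumPy sketch (not the authors' implementation; sample counts and densities are made up) of per-ray rendering from the reference view: background rays are forced to zero density, and foreground rays reuse the conditioning image's pixel color instead of the field's prediction.

```python
import numpy as np

def render_ray_constrained(sigmas, colors, deltas, ref_rgb, is_foreground):
    """Alpha-composite one reference-view ray under the hard constraint:
    background rays get zero density, and foreground rays reuse the
    reference pixel color instead of the field's predicted colors."""
    if not is_foreground:
        sigmas = np.zeros_like(sigmas)               # empty space behind background pixels
    else:
        colors = np.tile(ref_rgb, (len(sigmas), 1))  # override with the conditioning image color
    alphas = 1.0 - np.exp(-sigmas * deltas)          # standard NeRF alpha from density
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))  # transmittance
    weights = trans * alphas
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights

# A foreground ray reproduces the reference pixel (up to residual transparency).
sigmas = np.array([0.0, 5.0, 5.0, 0.1])
colors = np.random.rand(4, 3)        # whatever the radiance field would predict
deltas = np.full(4, 0.5)
ref_rgb = np.array([0.8, 0.2, 0.1])
rgb, w = render_ray_constrained(sigmas, colors, deltas, ref_rgb, is_foreground=True)
```

By construction, the rendered foreground color equals the reference color scaled by the ray's total opacity, and background rays render empty, so no reconstruction loss is needed at the reference view.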
Strengths: + The proposed solution is both simple and elegant while achieving convincing results. The modification to the rendering equation guarantees, by construction, that the single input view is going to be respected. This allows the capacity of the NeRF volume to focus on modeling the missing parts and recovering the geometry of the first view (when not using the monocular depth loss).
+ Great presentation, the paper is well written and quite self contained. Most details regarding the implementation of the method are provided as part of the appendix.
+ I appreciated the effort of the authors in Sec. 5.3 Tab. 1 to try to propose a novel metric for evaluating the quality of the inferred 3D representations in terms of their 3D consistency.
Weaknesses: ## Major
I have not found major weaknesses in the work.
## Minor
a. **Scale Ambiguity**: The scale of the 3D reconstructed model is inherently ambiguous due to the unknown depth associated with the first view. This means that depending on how the density of the first view converges, the same 3D object could be represented as a big object far away or a small object close to the original camera. Of course this, in turn, implies that moving the camera once the model is fitted will have drastically different effects. This is also recognized by the authors between lines 287 and 290. This is an inherent ambiguity of any method based on a single view, but I wonder if a simple regularization to push the density of the NeRF towards the central area of the volume could have helped to standardize the scale of the fitted 3D models.
b. **Possibly entangled text inversion**: to keep faithful details of the object the authors propose to perform textual inversion of a Stable Diffusion model to find a text token corresponding to the appearance of the object they are trying to reconstruct. To do so they apply various augmentations to the single view of the object to generate a training set for textual inversion and directly optimize an input token. Since all the images used for textual inversion are generated from a single seed image, the textual inversion token might pick up on specific details of the single view available rather than on the specific object that the authors are trying to reconstruct. An example of an unwanted entangled representation could be picking up on the background of the single view. Randomly cutting and pasting the foreground object into random positions over random backgrounds might be a viable solution to reduce this risk.
c. **Some ad hoc components per experiments**: From the manuscript it seems that the authors used the depth loss and the warm start only for some experiments but not for all of them. This makes evaluating the experimental evaluation a bit more confusing. If the two additional components do not have any negative impact on the final performance also for models which would not need them (as it seems the case from the ablations in Fig. 4) I would suggest just presenting all results with both components switched on for all experiments. If instead in some cases those have a negative effect I would mention it explicitly in the manuscript.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: ## Questions
1. How are you handling the background when training with the SDS? The NeRF being optimized does not have any background, are you adding a random one before feeding the image to Stable Diffusion to compute the SDS?
2. Can you clarify any possible misunderstandings with respect to what I wrote in weakness “b”?
3. The procedure used to initialize the text token, described between lines 38 and 40 of the appendix, is not very clear. How do you perform classification using CLIP in this context?
## Typos
* line 54: “the process of a generating” → “the process of generating”
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: No flag for negative societal impact.
Limitations have been discussed in Sec. 5.5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer 1JJW for the appreciation of our work, positive review and the constructive discussion. We discuss some of the questions raised by the reviewer here.
### Minor Weaknesses
> 1. **Scale Ambiguity**: ... an inherent ambiguity of any methods based on a single view but I wonder if a simple regularization to push the density of the NeRF towards the central area of the volume could have helped to standardize the scale of the fitted 3D models.
We adopt a slightly different idea to achieve this goal. The density of the radiance field is initialized with a Gaussian at the center with a chosen standard deviation. This biases the model towards a specific scale. The same idea is used in DreamFusion [1], RealFusion [14], and NeuralLift-360 [15].
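A rough sketch of such a density initialization follows — hypothetical NumPy code where `std` and `scale` are illustrative parameters, not the authors' values — placing a Gaussian density blob at the volume center:

```python
import numpy as np

def gaussian_density_bias(xyz, std=0.3, scale=10.0):
    """Additive density bias: a Gaussian blob at the volume center, which
    biases early optimization toward an object of a chosen scale."""
    return scale * np.exp(-0.5 * (xyz ** 2).sum(axis=-1) / std ** 2)

center = gaussian_density_bias(np.zeros((1, 3)))  # peak density at the origin
corner = gaussian_density_bias(np.ones((1, 3)))   # near-zero density away from center
```

Because the blob's standard deviation fixes the apparent object size at initialization, optimization tends to converge to that scale rather than an arbitrary far-away/large or nearby/small solution.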
> 2. **Possibly entangled text inversion**: ... An example of unwanted entangle representation could be to pick up on the background of the single view. Randomly cutting and pasting the foreground object into random positions over random backgrounds might be a viable solution to reduce this risk.
This is definitely a possible issue when performing textual inversion on a single image. Using the segmented foreground with random backgrounds might help alleviate this issue to some extent. Our approach of carefully initializing the learned special token also mitigates it.
Specifically, as briefly discussed in the supplementary material, we first classify the reference image using CLIP. This can be done by finding the noun $n$ in the CLIP vocabulary that minimizes the distance between the CLIP text embedding of "A photo of <$n$>" and the CLIP image embedding. The embedding of the special textual inversion token is then initialized with the text embedding of $n$. This ensures that the token focuses on the object in the image. For example, for an image of a red cat, the textual inversion token would be initialized with "cat". Textual inversion would then ideally update the embedding to capture "red cat". However, it is still possible, though less likely, that the embedding would drastically drift to capture background elements.
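A toy sketch of this initialization procedure, with a stand-in embedding table in place of the real CLIP encoders (function names and dimensions are illustrative, not the authors' implementation):

```python
import numpy as np

def init_token_from_image(image_emb, vocab_nouns, text_embed_fn):
    """Pick the vocabulary noun whose prompt 'A photo of <n>' embeds closest
    (by cosine similarity) to the image embedding; that noun's text embedding
    then initializes the textual-inversion token."""
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    sims = [cosine(image_emb, text_embed_fn(f"A photo of {n}")) for n in vocab_nouns]
    best = vocab_nouns[int(np.argmax(sims))]
    return best, text_embed_fn(best)

# Stand-in embedder: real code would call CLIP's text and image encoders.
rng = np.random.default_rng(0)
table = {w: rng.normal(size=8) for w in ["cat", "dog", "chair"]}
def fake_text_embed(prompt):
    return table[prompt.split()[-1]]   # keyed by the final word, for the toy example

image_emb = table["cat"] + 0.01 * rng.normal(size=8)  # "image" of a cat
noun, init_emb = init_token_from_image(image_emb, ["cat", "dog", "chair"], fake_text_embed)
```

In this toy setup the selected noun matches the object depicted, so the special token starts from a semantically meaningful embedding before textual inversion refines it.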
> 3. **Some ad hoc components per experiments**: From the manuscript it seems that the authors used the depth loss and the warm start only for some experiments but not for all of them. ... I would suggest just presenting all results with both components switched on for all experiments.
We do keep both components turned on except for the ablation experiments. This was possibly not communicated properly in the text. We will rephrase the text to make this clearer.
---
### Questions
> How are you handling the background when training with the SDS?
For every update, we randomly sample a RGB color and create a uniform background with this pixel value. Thank you for pointing this out. We will add this detail to the implementation details.
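A minimal sketch of this compositing step (hypothetical NumPy code; `acc` here denotes the ray's accumulated opacity, an assumption of this illustration):

```python
import numpy as np

def composite_random_background(rgb, acc, rng):
    """Blend a rendered image (rgb: HxWx3) over a uniform background of one
    randomly sampled color, using the accumulated ray opacity acc (HxW)."""
    bg = rng.uniform(size=3)                      # one random RGB color per update
    return rgb + (1.0 - acc)[..., None] * bg, bg

rng = np.random.default_rng(0)
rgb = np.zeros((2, 2, 3))   # fully transparent render, for the demo
acc = np.zeros((2, 2))
out, bg = composite_random_background(rgb, acc, rng)
```

Randomizing the background color each update discourages the radiance field from "painting" the background into the object's density.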
---
Rebuttal Comment 1.1:
Title: Follow up
Comment: Thank you for the additional details.
My doubts have been cleared.
I would suggest that the authors add the additional details about density initialization and textual inversion to the main paper or supplementary material (in case they are not there).
Strengths: The paper is easy to follow and well-written. Both, qualitative and quantitative analysis indicate that the proposed method outperforms the existing baselines.
In general, inferring 3D representations from single input images is an interesting and challenging task with high relevance to the community. The idea of integrating the input image as a hard constraint in optimization is intuitive and might be interesting to the community.
Weaknesses: My main concern is that the central claim of the paper, that integrating the image as a hard constraint over using a reconstruction loss makes the training more robust and that “The proposed ConRad representation significantly simplifies the process of a generating a consistent 3D model for the input image” (L.55), requires more experimental support. While the supplementary video shows a few failure cases for the baselines these results could be hand-picked. Importantly, it is also not clear if the stable training indeed results from using the proposed image conditioned radiance fields. For this, I would have liked to see an ablation study where the training pipeline is exactly the same except that once an image conditioned radiance field is used (hard constraint) and once a reconstruction loss is used (soft constraint), also ablating different strengths/weights of the reconstruction loss.
Using pearson correlation as a depth loss for monocular depth was already proposed in [1] which is not cited. Further, [1] is missing from the baseline comparison and while there is no code available, a qualitative comparison could be performed similarly to the other baselines, i.e. using results from the original paper.
[1] Deng et al, “NeRDi: Single-View NeRF Synthesis with Language-Guided Diffusion as General Image Priors”, CVPR2023
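For concreteness, a Pearson-correlation depth loss of the kind discussed here can be sketched as follows (an illustrative NumPy version, not either paper's code):

```python
import numpy as np

def pearson_depth_loss(rendered, mono, eps=1e-8):
    """Negative-correlation depth loss: 1 - Pearson(rendered, mono).
    Invariant to scale and shift, which suits monocular depth estimates
    that are only defined up to an affine ambiguity."""
    r = rendered - rendered.mean()
    m = mono - mono.mean()
    corr = (r * m).sum() / (np.linalg.norm(r) * np.linalg.norm(m) + eps)
    return 1.0 - corr   # ~0 when the depths are perfectly (positively) correlated

d = np.linspace(1.0, 2.0, 50)
loss_affine = pearson_depth_loss(d, 3.0 * d + 0.5)   # affine-related depths
```

Because Pearson correlation ignores scale and shift, the loss is near zero for any positively affine-related depth pair, which is exactly the invariance needed when supervising with monocular depth.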
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please also see Weaknesses.
L.262 mentions that NeuralLift360 uses mask supervision. However, this was not clear to me from the original NeuralLift360 paper. Could you please explain again how NeuralLift360 uses foreground masks and provide a pointer where this is stated in the original paper?
Your approach seems to achieve a higher image fidelity than the baselines. Is this only a result of more stable training or which component, e.g. compared to NeuralLift360, enables the higher image fidelity?
Minor:
L.277 “improved metrics”, it is not shown that the metrics improve over existing metrics, so I suggest removing “improved”
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: In general, the limitations and the broader impact were adequately discussed. From the supplementary video it looks like sometimes objects are not fully opaque (bird). This should be added to the limitations section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer ZVcc for the constructive feedback and suggestions. We address the reviewer's comments here with additional experiments and discussion.
> 1. My main concern is that the central claim of the paper, that integrating the image as a hard constraint over using a reconstruction loss makes the training more robust and that “The proposed ConRad representation significantly simplifies the process of a generating a consistent 3D model for the input image” (L.55), requires more experimental support.
Thank you for raising the concern that this claim does not seem sufficiently supported. Here we will provide additional discussion and results, and re-iterate some of our intuitions for this claim.
- Our proposed approach ConRad removes any need for additional reconstruction loss objectives. This in turn eliminates any associated hyperparameters. Furthermore, existing works (like RealFusion and NeuralLift-360) perform an "alternating optimization" strategy to optimize two separate objectives, which is generally an unstable optimization algorithm (except in special cases). Therefore, we believe that removing these components intuitively leads to a *simplification* of the process.
- Nevertheless, we agree that additional experimental evidence could help support this claim further. Based on the reviewer's suggestion, in Table 1 of the rebuttal PDF, we compare ConRad to models constructed by learning the representation using the reconstruction losses (for RGB and foreground mask). We investigate different strengths of reconstruction losses $\lambda$ and experiment with 20 ShapeNet objects. We observe that on most objects (18 out of 20), ConRad produces better reconstructions based on the "All View $d_{oracle}$" metric. We also observe the same result across all six metrics but omit this due to space constraints.
> 2. Using pearson correlation as a depth loss for monocular depth was already proposed in [1] which is not cited. Further, [1] is missing from the baseline comparison and while there is no code available, a qualitative comparison could be performed similarly to the other baselines, i.e. using results from the original paper.
[1] Deng et al, “NeRDi: Single-View NeRF Synthesis with Language-Guided Diffusion as General Image Priors”, CVPR2023
Thank you for pointing us to this reference. We will include this citation. To the best of our knowledge, this was unpublished work at the time of submission. However, we will add comparison to this work in the final version.
> 3. L.262 mentions that NeuralLift360 uses mask supervision. However, this was not clear to me from the original NeuralLift360 paper. Could you please explain again how NeuralLift360 uses foreground masks and provide a pointer where this is stated in the original paper?
This detail is not mentioned in the NeuralLift-360 paper. Please refer here for the implementation detail:
- https://github.com/VITA-Group/NeuralLift-360#data-preparation
- https://github.com/VITA-Group/NeuralLift-360/blob/main/nerf/utils_neurallift.py#L377-L383
- https://github.com/VITA-Group/NeuralLift-360/blob/main/nerf/utils_neurallift.py#L620-L628
> 4. Your approach seems to achieve a higher image fidelity than the baselines. Is this only a result of more stable training or which component, e.g. compared to NeuralLift360, enables the higher image fidelity?
We believe that the higher fidelity can be attributed to the stability achieved by capturing the reference view accurately using ConRad. There are no other major differences compared to NeuralLift-360 and RealFusion (except the addition of depth loss).
> 5. Minor: L.277 “improved metrics”, it is not shown that the metrics improve over existing metrics, so I suggest removing “improved”.
We will update this in the paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal and for providing the ablation study on the reconstruction loss. Since all my concerns were adequately addressed, I have updated my rating accordingly. | Summary: This paper introduces Image Constrained Radiance Fields (ConRad), a novel 3D representation that constrains initial radiance fields to a reference view image without requiring training. ConRad is adept at accurately modeling the input image in one reference view and is effectively integrated with Dreamfusion-style training to convert single images to 3D shapes. The results illustrate that ConRad’s 3D reconstructions are of high quality and closely resemble the original input.
Strengths: 1. The paper introduces an innovative image-conditioned radiance field, ConRad, which is capable of accurately modeling a reference view without the need for training, and significantly enhances the optimization of single image-conditioned NeRFs. Despite its simplicity, ConRad produces remarkable results in converting single images to 3D.
2. The authors adeptly tackle several technical challenges by employing depth loss and a warm start strategy. ConRad’s effectiveness is convincingly demonstrated through both qualitative and quantitative results.
3. The ablation studies presented in Figure 4 are methodologically sound and illustrate the efficacy of each module proposed. Additionally, the authors provide insightful analyses.
4. The paper is well-structured, with clear and easily comprehensible presentation.
Weaknesses: 1. It would be beneficial for the authors to include information on the training time required for ConRad to convert a single image to 3D, and compare this with the training times of original Dreamfusion or Dreambooth3D. It is pertinent to understand if the ConRad representation contributes to accelerating the convergence speed of NeRF optimization.
2. The paper should explore the impact of varying hyperparameters within ConRad. Additionally, as Instant-NGP is utilized to represent the 3D scene, it would be valuable for the authors to investigate the influence of different NeRF representations.
3. ConRad still exhibits issues such as color saturation, akin to those found in Dreamfusion-style optimization. It is recommended that the authors consider incorporating recent advancements in Dreamfusion, such as ProlificDreamer, to further enhance ConRad’s performance.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: see the strengths and weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer GS6t for their appreciation of our work, positive review and valuable suggestions to improve our paper.
> 1. It would be beneficial for the authors to include information on the training time required for ConRad to convert a single image to 3D, and compare this with the training times of original Dreamfusion or Dreambooth3D.
We hoped that ConRad would lead to faster convergence since the reference viewpoint is instantaneously captured. However, in practice, we did not observe convincing improvements in speed across all objects compared to the baseline (RealFusion). We keep the number of updates equal to RealFusion which works well for all objects. Both methods take approximately 20 minutes on a single A100 GPU. NeuralLift-360 uses twice the number of updates and takes 1 hour on the same hardware setup. We will add these details to the paper.
> 2. explore the impact of varying hyperparameters within ConRad ... it would be valuable for the authors to investigate the influence of different NeRF representations.
Thank you for the suggestion. We will report these in the final version. Switching the representation from Instant-NGP to other NeRF representations would require additional experimentation which precludes us from reporting it here due to time and resource limitations.
> 3. ConRad still exhibits issues such as color saturation, akin to those found in Dreamfusion-style optimization. It is recommended that the authors consider incorporating recent advancements in Dreamfusion, such as ProlificDreamer, to further enhance ConRad’s performance.
As an active area of research, there have been several recent advancements in this domain. Since ConRad improves the underlying representation, we believe that these advancements can be applied to ConRad. We thank the reviewer for the suggestion. We plan to investigate this and report any improvements in the final version.
---
Rebuttal Comment 1.1:
Comment: The authors' rebuttal addresses my concerns. I will maintain my existing scores. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time and feedback. We address individual comments to each reviewer separately.
Please find supporting material attached here as a PDF.
Pdf: /pdf/4a64abd6ad2918e5f7ed89790b2e5bd7853847e0.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper presents ConRad, a new method for reconstructing 3D objects from a single RGB image. At the core of ConRad is a neural radiance field that by design satisfies the reference view constraint. That is, the representation will always render the input RGB image at the reference view. This hard constraint gets rid of the need for training in the reference view, and enables ConRad to be trained in a similar way as DreamFusion. Experiments show that ConRad can produce 3D reconstructions more faithful to the input and produce more consistent 3D models than RealFusion and NeuralLift-360.
Strengths: - The idea of introducing reference-view reconstruction as a hard constraint is interesting and novel. It by design projects the image color to the 3D radiance field and gets rid of the need for training in the reference view. This design also serves as a useful prior during training and better preserves the texture and properties of the input image. I believe this design would be interesting to the community.
- The qualitative comparison shows that ConRad performs much better than RealFusion and NeuralLift-360 in reference view reconstruction and multiview consistency. The quantitative results of ConRad is also significantly better.
- The writing is clear and easy to follow.
Weaknesses: - Most parts of the pipeline are based on previous works, such as the texture inversion, multiview SDS loss, and depth guidance, which makes the technical contribution not strong.
- At line 174, it is mentioned that $\eta$ is set to 0.1 and "the visibility depth for each pixel is a point on the ray beyond which the contribution of color is minimal (less than 10%)." I find this design and explanation not convincing and maybe wrong.
In my understanding, when $\eta = 0.1$, it means the ray from the camera to the visibility depth has only contributed 10% color, and the ray after the visibility depth will contribute the remaining 90% color, which is the opposite of the explanation in the paper. Therefore, I think it makes more sense to set $\eta$ to 0.9. Authors should clarify this and provide an ablation study on the choice of $\eta$.
- In Fig. 3, the authors should show the qualitative results of all baselines (RealFusion and NeuralLift-360) for each example. There is enough horizontal space to do this. Besides, there are not enough qualitative comparisons in the current submission. Authors should provide more qualitative comparisons in the supplementary materials.
- For quantitative study, why not report the scores under the same setting as RealFusion for a fair comparison?
- From the video, it can be seen that the 3D object is semi-transparent in many cases.
- At line 237, it is mentioned that "Computation of visibility depth also does not significantly increase GPU memory consumption since we do not compute its gradients." However, since the method uses a depth loss, I believe the gradients need to be back-propagated through depth anyway. Then this argument does not make sense.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Authors may respond to the weaknesses mentioned above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer yDN8 for the appreciation of our work, overall positive review and constructive feedback. We clarify the questions raised by the reviewer here.
> 1. Line 174, ... I think it makes more sense to set $\eta$ to 0.9.
This is an error in Equation (3). The equation should be
$$ 1 - \frac{\int^t_{0} T(s)\, \sigma\big( r^{(i,j)}_p(s) \big)\, ds}{\int T(s)\, \sigma\big( r^{(i,j)}_p(s) \big)\, ds} = \eta $$
Thank you for pointing this out. We will fix this in the paper. We observed in experiments that the effect of $\eta$ varies with the volume and complexity of the object. However, setting it to 0.1 generally works for all objects. We will present additional visualizations in the final version with varying $\eta$.
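A discretized sketch of how such a visibility depth could be computed along one ray (hypothetical NumPy code; the sample spacing and density values are illustrative): the visibility depth is the first sample at which the accumulated fraction of color contribution reaches 1 − η, so at most a fraction η of the color lies beyond it.

```python
import numpy as np

def visibility_depth(sigmas, ts, eta=0.1):
    """First sample depth at which the accumulated compositing weight reaches
    (1 - eta) of the ray's total weight, i.e. at most eta of the color
    contribution lies beyond the returned depth."""
    deltas = np.diff(ts, append=ts[-1] + (ts[-1] - ts[-2]))
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas                       # per-sample color contribution
    frac = np.cumsum(weights) / (weights.sum() + 1e-10)
    return ts[np.searchsorted(frac, 1.0 - eta)]

ts = np.linspace(0.0, 4.0, 200)
sigmas = np.where((ts > 1.0) & (ts < 1.5), 20.0, 0.0)  # opaque slab starting at depth 1
d_vis = visibility_depth(sigmas, ts)
```

For an opaque slab beginning at depth 1, the returned visibility depth lands just inside the slab's front surface, matching the intuition that (1 − η) of the color is contributed by then.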
> 2. In Fig. 3, the authors should show the qualitative results of all baselines
We have added more qualitative visualizations of the baselines in the attached rebuttal PDF. We observe that ConRad consistently produces higher-quality reconstructions (compare to the visualizations in the main paper). For a fair comparison to the baseline works, Figure 3 in the main paper compares on the images chosen by the respective authors. We will add these additional visualizations to the supplementary material.
> 3. why not report the scores under the same setting as RealFusion for a fair comparison?
We faced several issues with the metric proposed in RealFusion:
- First, the authors did not release an implementation of this metric. This required us to attempt to reproduce the results presented in RealFusion.
- The metric presented in RealFusion requires conversion of the radiance field to a mesh using the marching cubes algorithm, followed by Iterated Closest Point (ICP) algorithm to match the estimated mesh to the ground truth mesh. Both these algorithms are known to be sensitive to the hyperparameters and would require manual tuning per object. This makes the metric difficult to reproduce.
In contrast, we took inspiration from the metric proposed in NeuralLift-360. The "All View $d_{ref}$" metric presented in our work is the same as the metric used in NeuralLift-360. We also report additional metrics ($d_{all}$, $d_{oracle}$) that build on top of this idea while maintaining simplicity. We will also release code for this metric along with the final version of the paper.
> 4. From the video, it can be seen that the 3D object is semi-transparent in many cases.
This is a common issue faced by image-to-3D approaches that rely on NeRF representations. Similar results can be observed in RealFusion and NeuralLift-360. We will include this in our discussion on limitations. One potential solution is to use mesh-based representations. However, this exploration would be beyond the scope of our work and would require additional research.
> 5. At line 237, it is mentioned that "Computation of visibility depth also does not significantly increase GPU memory consumption since we do not compute its gradients." However, since the method uses a depth loss, I believe the gradient need to be back-propagated through depth anyway.
This is referring to the computation of the "*visibility depth*" defined in Equation (3). This is not the same as the depth estimate used in the computation of the depth loss. It is true that the depth loss requires back propagation of gradients. This statement is alluding to the fact that the computation of *visibility depth* does not add additional memory consumption on top of that.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for the clarification and additional results. Most of my concerns are resolved, so I update my rating to weak accept. Please include these revisions in the revised version. | null | null | null | null | null | null |
AdaPlanner: Adaptive Planning from Feedback with Language Models | Accept (poster) | Summary: LLMs have shown success as autonomous agents that make and execute plans in sequential decision problems. Existing methods either make open-loop plans, limiting adaptability to the environment, or closed-loop plans. Existing closed-loop methods, apart from DEPS, keep the plan static but simply modify immediate actions according to environment feedback, leading to potentially sub-optimal policies. The authors introduce AdaPlanner, a closed-loop LLM planner that additionally allows for *plan* refinement during the episode. The success of their method not only relies on this, but additionally code-style prompts and a skill-discovery mechanism for few-shot exemplars. AdaPlanner outperforms existing works while relying on far fewer demonstration examples from similar tasks.
Strengths: - Empirically the authors show strong results with respect to sample efficiency and asymptotic performance.
- Many ablations make it easy to understand which components of the model lead to overall success.
- Conceptually simple approach.
Weaknesses: - In the evaluation section, the baselines are glossed over. This makes it hard to comprehend the distinction between their approach and the baselines.
- I’d recommend adding some of the Appendix descriptions to the evaluation section, and potentially referencing Table 1 more often.
- The authors use the term ‘hallucination’ a lot but do not define it.
- The authors discuss in- and out-of- plan refiners a lot before providing intuitive examples for when either would be necessary. Could the authors provide more examples earlier on in the paper?
- DEPS appears to be a relevant baseline. Could the authors include it or at least delve deeper into its limitations and why it is not appropriate?
- It appears that the largest contributors to the success of AdaPlanner, over existing approaches, are code-style prompts and skill prompts. Wouldn't it be worthwhile to apply those modifications to existing approaches, like Reflexion (Fig 4), and contrast?
- AdaPlanner prompts the LLM to correct any syntax errors. How important is this? Would be nice to include this ablation.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Line 80, could you define the output of pi, in the same way that you did for the planner?
- Line 81, shouldn’t it be P_t rather than P_{t - 1}?
- Lines 114 - 144, I think you've repeated a sentence.
- Line 216, what are the 6 task types?
- Line 132, how is N chosen and what’s its effect on performance?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: AdaPlanner still requires demonstrations for learning. Would be worthwhile comparing with RL agents trained directly on the task, without any expert demonstrations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[Weakness 1]**
We appreciate the reviewer's comments. Due to the space limitation, we deferred the detailed introduction of the baselines to Appendix 8.2. We evaluated AdaPlanner against a selection of representative baselines, both training-based and LLM-based.
- In ALFWorld, we compared AdaPlanner to
- BUTLER: an imitation-learning baseline for ALFWorld[1];
- ReAct [9] and Reflexion [10]: two prominent LLM-based methods classified as *Implicit Closed-Loop Methods*.
- In MiniWoB++, we evaluated AdaPlanner against
- CC-Net [11], WGE [12]: two training-based methods integrating supervised learning with reinforcement learning;
- WebN-T5-3B [13]: a fine-tuned language model;
  - RCI [2]: an *Implicit Closed-Loop Method* for solving MiniWoB++ tasks.
These methods reflect different perspectives (training-based and LLM-based) and offer a good reference frame for assessing how AdaPlanner enhances adaptive and sample-efficient planning.
We will update our paper to clarify the distinctions among these baselines and build a more detailed connection to Table 1.
**[Weakness 2]**
In the context of LLMs, “hallucination” commonly refers to *produced content that is nonsensical or unfaithful to certain sources* [14,15,16,17]. When applied to LLMs for decision-making tasks, “hallucination” specifically covers two cases: 1) *generated actions that are inadmissible in the environment*, or 2) *unfaithful presumptions made by the LLM about the environment settings*.
An example of the first case is provided in the *ReAct and Reflexion trajectory of Case 1*, Appendix 8.4.1, where the agent generates illegal actions; an example of the second case is in the *AdaPlanner Trajectory of Case 2*, Appendix 8.4.2, where the agent unfoundedly presumes the location of the watch.
**[Weakness 3]**
We appreciate the reviewer's comment on clarity. We will add an intuitive example from ALFWorld before the detailed introduction to these two refiners. We will also present Figure 2 earlier in the paper for better illustration.
**[Weakness 4]**
As shown in Table 1, DEPS relies on a training-based selector and requires additional data collection. It only utilizes past failures to refine its plans. In contrast, AdaPlanner can leverage both past successes and failures. Moreover, unlike AdaPlanner’s in-plan refinement, DEPS does not extract key information from observations, which may make it less adaptive than AdaPlanner.
DEPS was constructed around the OpenAI Codex API, which has been deprecated. DEPS is also primarily designed for Minecraft and poses difficulties for transfer to ALFWorld and MiniWoB++.
**[Weakness 5]**
ReAct and Reflexion are notable for their use of natural-language-based open-loop/in-plan reasoning and planning. The code interface and skill discovery are two of our major contributions over these methods. Although the two techniques are generally compatible with existing methods, incorporating them would turn those approaches into new variants that converge toward our method and would no longer accurately reflect the performance of the original methods as described in their papers. Such a comparison would also diminish the significance and novelty of the techniques proposed in AdaPlanner.
**[Weakness 6]**
We added an ablation study on this component as follows:
| Environment | Baseline | w/o Code Check |
|-----------------------------------|---------------|----------------|
| ALFWorld | 80.60 | 79.85 |
| MiniWoB++ | 92.87 | 91.92 |
The table above shows the success rate (%) when ablating the code check. As shown, the code check contributes to the overall performance, but the improvement is relatively marginal because the LLMs (GPT-3 and GPT-3.5) generate code that is almost free of syntax errors. Upon further investigation, we found that only 1.49% of the code generated for ALFWorld contained syntax errors, so the code check is not the primary source of the performance improvement. Instead, as detailed in Figure 4, the main contributions of our study are the proposed closed-loop structure, code-styled prompting, and skill discovery. These mechanisms collaboratively foster adaptive and sample-efficient decision-making.
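A check of this kind can be sketched in a few lines (a hypothetical illustration of the idea, not the authors' implementation), e.g. using Python's built-in `compile()` to catch syntax errors before a generated `solution()` is executed:

```python
def passes_code_check(generated_code: str) -> bool:
    """Return True if the LLM-generated code compiles without syntax errors."""
    try:
        compile(generated_code, "<llm_solution>", "exec")
        return True
    except SyntaxError:
        return False

# A well-formed solution passes the check.
assert passes_code_check("def solution():\n    return 'take egg 1'")
# An indentation error is caught before execution.
assert not passes_code_check("def solution():\nreturn 'take egg 1'")
```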
**[Question 1]**
$\pi(\cdot|g, c_t, P_t)$ generates the action at step $t+1$ conditioned on a given plan $P_t$.
**[Question 2]**
It should be $P_{t-1}$ as we were discussing the $t$-th step.
**[Question 3]**
Thank you for this comment. We will revise the sentence in our updated paper.
**[Question 4]**
The 6 task types are ```Pick, Clean, Heat, Cool, Examine, Pick two```. We provided details about these tasks in Appendix 8.1 (line 407-415).
**[Question 5]**
The LLM agent automatically determines the value of $N$ during plan generation. It's not a manually set parameter but rather task-specific.
**[Limitations]**
The skill discovery mechanism can greatly alleviate the need for demonstrations. In the ablation studies (Figure 4d), we applied zero-shot prompting for MiniWoB++ by *omitting any demonstrations*. AdaPlanner then successfully finds feasible solutions for 21 tasks and enhances the overall success rate by 15%. As indicated in Figure 3, incorporating skill discovery can effectively reduce the number of demonstrations required to achieve satisfactory performance.
Generally, RL agents using expert demonstrations outperform those without them. In our evaluation, we compared AdaPlanner with several RL-refined imitators, such as BUTLER, CC-Net, and WGE, which employ 100k, 23k, and 10 demonstrations per task, respectively. Comparing AdaPlanner with these baselines underscores AdaPlanner's superior performance and suggests its potential superiority over baselines that do not use demonstrations.
Our future work will focus on enhancing AdaPlanner to perform well even without demonstrations.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: I thank the authors for their detailed rebuttal. I appreciate the additional ablation and the comparisons made with methods that do not leverage demonstrations. In line with my review, I believe this is a good paper and will keep my score as is.
---
Reply to Comment 1.1.1:
Comment: Thanks for reading our rebuttal and providing valuable feedback. We will incorporate these additional results and discussions into the updated version of our paper. | Summary:
The paper proposes AdaPlanner, an LLM-based adaptive planner for text-based sequential decision-making tasks. The planner is adaptive in the sense that it can refine the generated plan/policy based on feedback.
The contributions made in this paper include the following
1. interacting with the environment with LLM in the loop
2. a code-style prompt is engineered for LLMs to output a policy
3. refining the LLM policy for the current task based on feedback
4. prompt tuning for new tasks based on previous interaction (termed skill discovery)
The proposed AdaPlanner is evaluated on two text-based sequential decision-making environments ALFWorld and MiniWoB++. Their experiments indicate that with feedback, LLMs can adapt the plan.
Strengths:
* The paper is well written.
* The paper focuses on extremely relevant and significant problems.
Weaknesses: * I find the paper lacks significant details. Please see the next section for the list of questions.
* The paper employs sloppy mathematical notations.
* The paper lacks the rigor of scientific evaluation.
* The paper misses all references to LLM-based approaches for planning with PDDL. The one that I find most relevant for code generation is "Generalized Planning in PDDL Domains with Pretrained Large Language Models, Tom Silver, Soham Dan, Kavitha Srinivas, Joshua B. Tenenbaum, Leslie Pack Kaelbling, Michael Katz”
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors:
**Major**
1. How is the programmatic response from LLM converted to action responses? Did the conversion require manual intervention? For instance, Figure 2 has an indentation error which would result in a wrong plan. Were such indentation errors evaluated manually? Can authors provide a list of errors made by LLMs?
1. In line 167, what does an alignment between ‘anticipated plan’ and environment mean? How does the AdaPlanner observe the alignment?
1. Can authors provide details about the size of the task used in the prompt (for samples) vs the size of the task that was successfully solved by AdaPlanner? To establish the claim of sample efficiency, it is important to understand if the planner is able to efficiently plan for tasks that are significantly different from the prompts.
1. The X-axis in Figure 3 indicates `# Samples per task`. Is this the number of samples provided for each trajectory? Or sum?
1. What was the length of plans or length of trajectories generated by AdaPlanner vs other approaches? To claim the effectiveness of the AdaPlanner, it is important to compare the length of successful trajectories.
1. For skill discovery, how is the solution converted to the skill? How are skills represented? How large is the skill memory? Were the discovered skills included in the count of samples used for training as they are training samples for the next set of trajectories?
1. It is not clear how skills are filtered and what criteria are used for the evaluation and ranking of skills.
1. What is the connection between skill discovery and prompt tuning?
1. The success rate of "With SD" in Figure 4d looks significantly reduced from Figure 4a. Were different settings used for these experiments?
1. At various places, the paper mentions "environment feedback". In my opinion, this is a misnomer. The feedback is not from the environment. The environment just provides the next observation, the feedback is generated by the agent itself. And the use of observation to refine a plan or next action is quite standard practice in RL. I would highly recommend dropping the term feedback from the title.
1. The use of the terms plan and policy is a little confusing. A plan is a sequence of actions. A policy is a mapping from states to actions. By this definition, the `solution()` function is a policy. In the preliminaries, the planning policy ($\rho$) is conditioned on a previous plan $P_t$. However, the appendix describes the refinement prompt using the assertion error (instead of `solution()`). Isn't the assertion error providing information about the policy (the `solution()` function)? So I am confused by the terminologies. Is $\rho$ refined conditioned on the policy or the plan? The usage of these terms is also confusing in the Preliminary section. Request authors to precisely define the mathematical notations and highlight what they represent in the examples.
**Minor**
12. In line 387, there are extra curly braces.
12. The notation $\rho$ is used in line 73 but introduced much later.
12. As the context $c_t$ is defined as a sequence of action and observations from time step $0$ to $t$, it is not clear what $c_{>t}$ means (in line 116).
12. Open-Loop system in Figure 1 should have an arrow going from env to planner with $o_1$.
12. Statement in Line 144 "To generate a plan .." looks like a repetition of Line 141 "To generate an initial plan..."
12. In line 116, if $h_t$ is obtained from $c_t$ then would it not be captured in $c_{>t}$? An example of $h_t$ would help better understand the proposed update.
12. In line 73, as $\rho$ is defined using $\Delta(A^{T})$. But the length $T$ is not fixed.
12. In line 73 $\rho$ is defined where a plan is conditioned only on observation and goal. However, later it is conditioned on the context, plan, and goal.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations:
* The evaluations are restricted to text-based sequential decision-making problems and tasks where inadmissible actions do not cause drastic changes in the environment. On the contrary, inadmissible actions act like no-ops. Further, the paper does not present an analysis of plan length. Hence, the analysis is limited to zero-risk environments.
* The claim made in the abstract that the skill discovery mechanism enables the agent to plan with fewer task demonstrations is not substantiated in the evaluations. The evaluation in Fig. 4d only establishes an improvement in success rate, not sample efficiency.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Due to the 6000-character limit, we only address the most important questions here. We will provide detailed clarifications during the discussion period.
**[Weakness 1]** Due to the page limit, we primarily discussed the main components of AdaPlanner: closed-loop structure, code-styled prompting, and skill discovery in the main body. We have included additional details in the supplementary section to provide a more comprehensive understanding.
In response to the reviewer's list of questions regarding details and notations, we will address each of them separately in the following section. For some of the questions, we can incorporate clarifications in the updated version of our paper.
**[Weakness 2]** In Section 2, we have provided a thorough explanation of the mathematical notations for understanding our method. We have defined planning policies for both open- and closed-loop systems in detail.
For the comment "sloppy mathematical notations," we would appreciate it if the reviewer could provide specific examples. This will allow us to address the concern accurately and make any necessary revisions.
**[Weakness 3]** We strongly disagree with the statement that "the paper lacks the rigor of scientific evaluation". Our evaluation is rigorous in the following aspects:
- Benchmarks: We provided thorough evaluations spanning two representative and widely-accepted decision-making environments;
- Baselines: We've contrasted AdaPlanner with prevailing methods and showed our approach outperforms existing methods in success rate and sample efficiency.
- Ablation Studies: We validated the contribution of each component within AdaPlanner.
- Reproducibility: We've included extensive technical details to ensure strong reproducibility.
We firmly believe our research upholds rigorous scientific evaluation standards.
**[Weakness 4]** Most of the existing work along this line of research, including [4, 5, 6], is still open-loop with PDDL. The only exception is [3], as the reviewer suggested. But it was impossible for us to cite it in our submission because the paper was uploaded to arXiv on May 18, 2023 --- after the submission deadline.
Meanwhile, there are significant differences between [3] and our work. In [3], the plan formulated on the training set lacks task-specific refinement during the execution of the evaluation task. In contrast, our AdaPlanner dynamically refines the plan and adapts to various feedback.
Our mechanisms in AdaPlanner could be applied to PDDL, but this extension is out of the current paper's scope and could be explored in the future.
**[Q1.1]** As described in line 161-164, we formulated an environment interface that 1) grounds the generated actions from the solution() function to the admissible actions in the environment, and 2) routes the observation from the environment back to the code as a return value.
**[Q1.2]** No. The interface described above automatically carries out this conversion without manual intervention.
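A minimal sketch of such an interface (purely illustrative; `TextEnv`, `interface_step`, and the fuzzy-matching strategy are our assumptions, not the paper's actual implementation) could look like:

```python
import difflib

class TextEnv:
    """Toy stand-in for a text-based environment (assumed for illustration)."""
    def __init__(self, admissible, observations):
        self.admissible = admissible      # admissible action strings
        self.observations = observations  # action -> observation text
    def step(self, action):
        return self.observations.get(action, "Nothing happens.")

def interface_step(env, raw_action):
    """Ground a generated action to the closest admissible one, then
    route the environment observation back to the code as a return value."""
    match = difflib.get_close_matches(raw_action, env.admissible, n=1)
    grounded = match[0] if match else raw_action
    return env.step(grounded)

env = TextEnv(
    admissible=["take egg 1 from countertop 2", "go to microwave 1"],
    observations={"take egg 1 from countertop 2": "You pick up the egg 1."},
)
# A slightly off-format generated action is grounded to an admissible one.
print(interface_step(env, "take egg from countertop 2"))  # You pick up the egg 1.
```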
**[Q1.3]** Thanks for pointing out the indentation problem in Figure 2. We found that it was a typo made by the authors when making this figure, not the LLM. We will correct this typo in the updated version.
**[Q2]** The alignment means the execution of the plan proceeds as expected and no assertion is triggered. This is consistent with our definition of in-plan feedback introduced in lines 44-46. Note that there is no in-plan alignment check; in-plan refinement is only used for extracting key information from feedback.
**[Q3]** We added evaluation results in MiniWoB++ as follows:
| Task group | With Feedback | No Feedback | All |
|-------------------------------------------------|----------------------------------|---------------------------------|---------------------------|
| # Tasks solved | 410.00 | 2,050.84 | 2,460.84 |
| # Samples in the prompt | 13 | 25 | 38 |
| # Sample / # solved tasks (%) | 3.17 | 1.22 | 1.54 |
In MiniWoB++, the ratio # Sample / # solved tasks decreases to approximately 3%. This discussion and Figure 3 jointly show AdaPlanner’s sample efficiency.
**[Q4]** ``# Samples per task`` refers to the number of expert demonstrations provided for each task type.
**[Q5]** The comparison of the average trajectory lengths in ALFWorld is presented as follows:
| Method | Trajectory length (step) |
|:----------:|:---------------:|
| ReAct | 25.81 |
| Reflexion | 18.90 |
| AdaPlanner | 15.60 |
AdaPlanner generally requires fewer steps to complete the tasks compared to ReAct and Reflexion.
**[Limitation 1]** We conducted an analysis of the trajectory length, detailed in our response to *Question 5*, which underscores the effectiveness of AdaPlanner.
The generalization to non-zero-risk environments for LLM agents is an interesting future direction. However, this is beyond the scope of our paper, and there is no widely-accepted benchmark for these tasks yet.
While we have focused on text-based environments, our method is not limited to them. For example, integrating vision-language models like CLIP could potentially allow AdaPlanner to interact with more visually complex environments.
**[Limitation 2]** We added ablation studies in MiniWoB++. The method with skill discovery requires only 15 samples to outperform the variant without skill discovery, even though the latter used twice as many samples. It is evident that skill discovery enhances sample efficiency.
| \# samples | 38 | 30 | 20 | 15 | 0 |
| --------------------------- | ----- | ----- | ----- | ----- | ----- |
| With SD (%) | 92.87 | 84.06 | 79.17 | 75.17 | 60.38 |
| Without SD (%) | 82.40 | 73.58 | 68.70 | 64.70 | 45.47 |
---
Rebuttal Comment 1.1:
Comment: **Please check the remaining part of our rebuttal to Reviewer oy34 as follows:**
**[Q1.4]** We summarized the errors that have occurred through the evaluation in ALFWorld as follows:
| Type | Description |
| --- | --- |
| Indentation error | Occasionally, the code may contain indentation errors. These can be addressed by the Code Check. |
| Reference error | The ```start_from``` breakpoint may be misconfigured so that the last breakpoint is not properly loaded. This can be mitigated by out-of-plan refinement. |
| Incomplete code | Due to the token limit, the generated code might sometimes be truncated. We can adopt newer models that support extended context lengths, such as gpt-3.5-turbo-16k. |
**[Q6.1/6.2]** We use each solution as a demonstration in the prompt to solve similar tasks. If including the solution boosts the success rate, we will keep the solution in the skill memory and add it to the prompt for future task-solving.
**[Q6.3]** The skill memory can potentially be large, depending on the task complexity. The automatic adjustment of the skill memory's size is an important feature that we will investigate in the future.
**[Q6.4]** AdaPlanner is purely prompting-based and there is no training process involved. We add the discovered skills to the prompts. The skills are actively discovered by the agent, thus not included in the count of expert demonstrations.
**[Q7]** As illustrated in line 209-213, we assess the effect of using each discovered skill as the demonstration in the prompt to solve similar tasks. If adding a candidate solution boosts the success rate on these tasks, it's added to the memory of discovered skills; if not, it's discarded.
**[Q8]** The successful skills discovered by AdaPlanner will be added to the prompt. It could be understood as a method for prompt tuning. However, we emphasize that this procedure is automatically completed by LLM itself during the planning.
**[Q9]** Yes. The detailed settings for these two figures were mentioned in lines 227-229 and 442-445. In Figure 4a, we adopt the setting of prompted samples as in Table 4. In Figure 4d, we only provide one sample of the simplest task (pick) and use skill discovery to explore skills for the remaining five tasks. The difference in the success rate of “with SD” in Figure 4d originates from this setting difference.
**[Q10]** We use “environment feedback” to indicate any outcome provided by the environment, including the observations.
**[Q11]** The assertion error provides information about the plan rather than the policy. For example, in ALFWorld, the error message reports which action within the plan has been executed and the error occurs (e.g., ```Error in [Step X], …```). This information corresponds to $P_{t-1}$ as in the definition of $\rho(P_t|g, c_t, P_{t-1})$ in line 119.
**[Typos] Q12, 13, 14, 15, 16, 19** Thank you for the comments. We will revise these typos and notations in the updated version of our paper.
**[Q17]** The $h_t$ would be included in $c_{>t}$. For example, the agent identifies the target object ```book 1``` from the environment observation ```On drawer 2, you see a book 2, and a keychain 1.``` Here $h_t$ is the identifier information which is then used for future actions.
**[Q18]** We fix $T$ as the step limit of the environments. For example, in ALFWorld, this number is set to 50. | Summary: The paper presents AdaPlanner, a closed-loop planning method that uses a large language model (LLM) to solve tasks in text-based environments. AdaPlanner operates by decomposing a complex task into manageable sub-goals and predicting environmental feedback for each. During execution, it refines its actions based on the feedback received from the environment. AdaPlanner operates solely via prompting, eliminating the need for a dedicated training phase and reducing its computational cost. The paper demonstrates that AdaPlanner consistently outperforms existing baselines, achieving state-of-the-art performance in ALFWorld tasks and MiniWoB++ tasks.
Strengths: - AdaPlanner introduces a novel approach to task-solving in text-based environments using a large language model. It stands out for its closed-loop planning method and its ability to decompose tasks into manageable sub-goals.
- The paper is well-written and clear. The authors have done a good job of explaining complex concepts and methodologies in an understandable manner.
- The work presents a new way of leveraging large language models for task-solving in text-based environments. The results show that AdaPlanner can effectively leverage feedback to refine its plans and enhance its performance.
Weaknesses: - The part about skill discovery is not described very clearly, and I still cannot understand the details of the skill discovery module well.
- The author compared the version without a code interface in the experiment, but it seems that they did not specifically show the prompt after removing the code interface. At the same time, as an ablation experiment, it is also necessary to analyze the effects of specific components in the code interface.
- The phenomenon that GPT-3 performs better than GPT-3.5 is interesting, but the paper only compares GPT-3 and GPT-3.5 in ALFWorld, without conducting the same experiments in MiniWoB++ to further support the conclusion. Also, the authors' hypothesis about this phenomenon (the smaller scale of GPT-3.5) lacks specific analysis or literature references to support it.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - In the experiment, what is the proportion of in-plan and out-of-plan occurrences? How will this proportion change over time? This should be a necessary indicator for understanding the two refiners.
- On MiniWoB++, will there be better performance from GPT-3 than GPT-3.5?
- Is there still a necessity for AdaPlanner with larger-scale LLMs, such as GPT-4, which have better self-refining capabilities?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: - As mentioned above, this paper still needs more experiments and analysis to further validate the rationality of its methods, as well as the observed phenomena and corresponding hypotheses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[Weakness 1]**
Thank you for your valuable feedback. In the skill acquisition stage, AdaPlanner harnesses adaptive closed-loop planning to solve unseen tasks using limited or no demonstrations. The successful solutions found in this trial-and-error process are called the candidate discovered skills and are gathered in a pool.
During the following skill filtering stage, we assess the effect of using each discovered skill from the pool as the demonstration in the prompt to solve similar tasks. If a candidate solution boosts the success rate, it is added to the memory of discovered skills; if not, it is discarded.
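As a rough sketch of this filtering stage (hypothetical illustration only; `eval_success_rate` stands in for the actual evaluation on similar tasks and is our assumption, not the paper's code):

```python
def filter_skills(candidates, eval_success_rate):
    """Greedily keep a candidate skill only if adding it to the prompt
    raises the success rate on similar tasks."""
    skill_memory = []
    for skill in candidates:
        baseline = eval_success_rate(skill_memory)
        if eval_success_rate(skill_memory + [skill]) > baseline:
            skill_memory.append(skill)  # skill boosts success rate: keep it
        # otherwise the candidate is discarded
    return skill_memory

# Toy evaluator: only "useful" skills raise the success rate (assumption).
useful = {"heat-egg", "cool-apple"}
rate = lambda memory: sum(skill in useful for skill in memory)
print(filter_skills(["heat-egg", "noisy-skill", "cool-apple"], rate))
# -> ['heat-egg', 'cool-apple']
```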
**[Weakness 2]**
For the *prompts without code interface (CI)*:
The initial planning phase is prompted as follows:
```
Play as a planner. Make a general plan to finish the task by referring to the paradigm that solved a similar task.
Here is an example:
<example>
You need to generate a new plan by transferring the given paradigm to the new task.
[GIVEN PARADIGM]
<paradigm>
[NEW TASK]
<task>
```
The <example> contains human demonstrations of general plans as follows. Note that the question marks like (id?) indicate the information that will be completed through feedback.
```
Your task is to: heat some egg and put it in diningtable.
1. Search_and_find egg (id?).
2. Take egg (id?) from the place it was found (name? id?).
3. Go_and_heat egg (id?) to microwave 1 to heat the egg (id?).
4. Go_and_put egg (id?) in/on the diningtable 1.
```
For *Ablation study on code interface components*:
We conducted ablation studies on several components of the code interface, such as the assertion that allows for out-of-plan refinement and the breakpoint ```start_from``` that enables the refine-then-resume mechanism.
| Environment | Baseline | w/o Code Check | w/o assertion|
|:----------------:|:-----------:|:--------------------:|:-------------:|
|ALFWorld |80.60|79.85|75.12|
|MiniWoB++ |91.11|89.78|77.78|
The table above shows the success rate (%) when ablating the assertion and the breakpoint ```start_from```. Removing any of these components results in a drop in performance, which signifies their effectiveness in AdaPlanner's design. Notably, when the code interface is fully substituted with natural language, as previously detailed, there is a significant reduction in performance across both environments. This drop underscores the essential role of the code interface in AdaPlanner.
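For concreteness, the two ablated components can be sketched as follows (a minimal illustration in the spirit of the paper's code-style prompts; the environment, actions, and helper names are our assumptions, not code from the paper):

```python
class ToyEnv:
    """Toy stand-in for ALFWorld-style text feedback (assumed for illustration)."""
    def __init__(self, responses):
        self._responses = iter(responses)
    def step(self, action):
        return next(self._responses)

def solution(env, start_from=1):
    """Hypothetical code-style plan: assertions surface out-of-plan errors,
    and `start_from` lets a refined plan resume from the failed step."""
    if start_from <= 1:
        obs = env.step("go to drawer 2")
        assert "drawer 2" in obs, f"Error in [Step 1], {obs}"
    if start_from <= 2:
        obs = env.step("take book 1 from drawer 2")
        assert "pick up" in obs, f"Error in [Step 2], {obs}"
    return "done"

# A first attempt fails at step 2 and reports which step broke...
try:
    solution(ToyEnv(["You arrive at drawer 2.", "Nothing happens."]))
except AssertionError as err:
    print(err)  # Error in [Step 2], Nothing happens.
# ...so the refined plan can resume from the breakpoint.
assert solution(ToyEnv(["You pick up the book 1."]), start_from=2) == "done"
```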
**[Weakness 3]**
We evaluated AdaPlanner and RCI on gpt-3.5-turbo and compared the results with those obtained on text-davinci-003:
| Method | With Feedback (9 tasks) | No Feedback (44 tasks) | All (53 tasks) |
|-------------------------------|-------------------------|------------------------|----------------|
| RCI (gpt-3.5-turbo) | 70.89 | 76.36 | 75.43 |
| RCI (text-davinci-003) | 81.56 | 92.68 | 91.00 |
| AdaPlanner (gpt-3.5-turbo) | 75.56 | 78.05 | 77.63 |
| AdaPlanner (text-davinci-003) | 91.11 | 93.22 | 92.87 |
The table above shows the success rate (%) on two subsets of tasks in MiniWoB++. We observe a noticeable performance drop of both methods on gpt-3.5-turbo, which coincides with the results obtained on ALFWorld (Table 2).
We also found a similar hypothesis, drawn from various experiments, in [7]. Given that OpenAI does not publicly disclose the technical details or source code of its GPT models, a definitive validation of our hypothesis remains challenging. As indicated in OpenAI's documentation, the gpt-3.5-turbo model is primarily optimized for human conversation tasks, which could potentially compromise its performance on tasks such as code generation and reasoning [8].
**[Question 1]**
We investigated the occurrence (average times per task) of both in-plan and out-of-plan refinements with our analysis as shown below:
| Environment | In-plan refinement | Out-of-plan refinement | Proportion (in/out) |
|-----------------------------------|--------------------|-------------|---------------------|
| ALFWorld | 2.83 | 6.40 | 0.44 |
| MiniWoB++ | 0.33 | 1.78 | 0.19 |
The table illustrates that out-of-plan refinement is invoked more frequently in both environments, emphasizing its important role in AdaPlanner's planning process. The average occurrence for both types of refinements varies between ALFWorld and MiniWoB++. This can be attributed to the differences in the average lengths of the trajectories in ALFWorld and MiniWoB++, which stand at 15.60 and 5.40 steps, respectively.
The proportion of in-plan to out-of-plan refinements is task-specific. In tasks with extensive environmental observations in natural language, in-plan refinement might be more prevalent. On the other hand, analyzing this proportion 'over time' within a task may be challenging because neither type of refinement occurs densely enough to make such a temporal analysis meaningful.
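For reference, the reported proportions can be reproduced directly from the per-environment averages in the table above:

```python
# Quick check of the reported refinement proportions (average occurrences
# per task, taken from the table above).
averages = {"ALFWorld": (2.83, 6.40), "MiniWoB++": (0.33, 1.78)}
for env, (in_plan, out_of_plan) in averages.items():
    print(env, round(in_plan / out_of_plan, 2))
# ALFWorld 0.44 and MiniWoB++ 0.19, matching the Proportion (in/out) column
```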
We will include these findings and discussions in the updated version.
**[Question 2]**
Yes. Please refer to our response above on *Weakness 3*.
**[Question 3]**
We would like to emphasize that the proposed closed-loop structure and adaptive refinement in AdaPlanner are fundamentally compatible with a wide range of LLMs. AdaPlanner provides a robust closed-loop planning framework and is designed to assist larger-scale LLMs in further enhancing their performance.
However, we have not yet been granted access to the GPT-4 API. Once it becomes available to us, we will carry out additional evaluations of AdaPlanner on GPT-4 and update our paper to reflect any significant findings or discussions that arise from this analysis.
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's response. Most of my concerns have been addressed, for now I will maintain my current score and continue to pay attention to other reviews and discussions.
---
Reply to Comment 1.1.1:
Comment: Thank you for offering feedback on our rebuttal. We will update our paper and include the additional results and discussions.
---
Summary: This paper looks at explicit closed-loop systems with LLMs for adaptive planning utilizing environmental feedback. They showcase better planning performance on the ALFWorld and MiniWoB++ environments than existing state-of-the-art works such as ReAct and Reflexion.
Strengths: The paper is well written and the experiments are thorough. They present an interesting improvement over the current works like ReAct and Reflexion.
Weaknesses: 1. The kinds of tasks in these domains do not seem to require interaction resolution, where multiple conflicting causal links from the initial state to the goal state must be resolved (including negative interactions between subgoals). This could also lead to the human demonstrations helping significantly with these tasks. It would be useful to analyze the performance of AdaPlanner specifically in such cases.
2. I think non-ergodic environments could clearly pose danger to such agents. It would be interesting to see how AdaPlanner can perform against ReAct or Reflexion in such environments.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Given that the LLM seems to verify the plan to determine its feasibility, what was its efficiency in those assessments? Are there any results pertaining to that?
2. Is there any classification of the tasks with respect to their hardness?
3. For how many of these tasks did the human expert demonstration solve the task?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have addressed some of the limitations. I have provided some limitations in the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: **[Weakness 1]**
Thank you for your insightful comments. We assume "conflicting causal link" refers to the existence of non-revocable actions and states. In MiniWoB++, a significant number of tasks indeed present multiple conflicting causal links. For example, a mistaken click in the task ```login-user-popup``` can directly lead to task failure. As shown in our evaluation (Appendix 8.5), when provided with a single human demonstration for this task, AdaPlanner achieves a success rate of 98%, which is comparable to the state-of-the-art training-based baselines and outperforms all LLM baselines. We can update our paper to include specific case studies that elaborate on such cases.
**[Weakness 2]**
We assume "non-ergodic environments" refers to environments that present irrevocable actions. Within this context, both ALFWorld and MiniWoB++ are non-ergodic. In ALFWorld, several actions, such as ```clean```, ```cool```, and ```heat```, have an irreversible effect on the object. Likewise, most MiniWoB++ tasks are non-ergodic, as several actions, such as clicking a button, are irreversible. In these settings, AdaPlanner has shown good overall performance in comparison with ReAct, Reflexion, and RCI. Our future work will focus on evaluating these approaches in other representative non-ergodic environments.
**[Question 1]**
In AdaPlanner, the proposed code check and out-of-plan refinement determine and improve the feasibility of the generated plan. The following table shows the success rate (%) of AdaPlanner in ALFWorld and MiniWoB++ (in 9 tasks with feedback), ablating the out-of-plan refinement (assertion component) and code check.
| Environments | Baselines | w/o Code Check | w/o assertion|
|:----------------:|:-----------:|:--------------------:|:---------:|
|ALFWorld (134 tasks)|80.60|79.85|75.12|
|MiniWoB++ (9 tasks)|91.11|89.78|77.78|
As shown in the table, both the code check and out-of-plan refinement contribute to enhancing the success rate, with the latter demonstrating a particularly significant impact on AdaPlanner’s overall performance. This result shows the importance of the proposed assertion mechanism in efficiently validating the current plan's feasibility, thereby facilitating adaptive and robust planning through continuous feedback and refinement.
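To make the two ablated mechanisms concrete, the following is a hedged sketch (our own simplified naming, not the paper's implementation) of how a syntax-level code check and assertion-triggered out-of-plan refinement could compose into a closed planning loop:

```python
# Sketch: a "code check" gates execution of a generated plan, and an
# AssertionError raised by an in-plan assertion triggers out-of-plan
# refinement, i.e., asking the planner for a revised plan.
import ast

def code_check(plan_src: str) -> bool:
    """Reject plans that do not parse as valid Python."""
    try:
        ast.parse(plan_src)
        return True
    except SyntaxError:
        return False

def run_with_refinement(plan_src, planner, env, max_refinements=3):
    """Execute a plan; on a failed assertion, ask the planner to revise it."""
    for _ in range(max_refinements + 1):
        if not code_check(plan_src):
            plan_src = planner(plan_src, "syntax error")
            continue
        scope = {}
        exec(plan_src, scope)  # load the plan's solve() function
        try:
            return scope["solve"](env)
        except AssertionError as err:
            # Out-of-plan refinement: revise the plan using the failure message.
            plan_src = planner(plan_src, str(err))
    return False

# Toy "planner" that returns a corrected plan once it sees any feedback.
BAD = "def solve(env):\n    assert env['door'] == 'open', 'door closed'\n    return True\n"
GOOD = "def solve(env):\n    env['door'] = 'open'\n    return True\n"
planner = lambda src, feedback: GOOD

print(run_with_refinement(BAD, planner, {"door": "closed"}))  # prints: True
```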
**[Question 2]**
In ALFWorld, the tasks are categorized based on the length of the solution trajectories [1]:
| Hardness | Easy | Medium | Medium | Medium | Hard | Hard |
|---|---|---|---|---|---|---|
| Task | Pick | Clean | Heat | Cool | Examine | Pick two |
| General solution chain | find->pick->goto->put | find->pick->goto->clean->goto->put | find->pick->goto->heat->goto->put | find->pick->goto->cool->goto->put | find->pick->find->examine | find->pick->goto->put->find->pick->goto->put |
| Average length of AdaPlanner solution trajectories (steps) | 10.79 | 13.45 | 17.61 | 13.33 | 21.00 | 20.71 |
In MiniWoB++, we follow the difficulty classification outlined in [2], arranging the 53 tasks into three tiers (easy, medium, and hard) based on the range of success rates. This categorization is illustrated in the following table:
| Hardness | Easy | Medium | Hard |
|--------------------|:------------:|:-----------:|:-----------:|
| Success rate range | [1, 0.9] | (0.9, 0.6] | (0.6, 0] |
| Example task | click-widget | click-tab-2 | count-shape |
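A small helper (our own naming, not from the paper) mirroring the tier boundaries above, reading the intervals as easy = [0.9, 1], medium = [0.6, 0.9), and hard = [0, 0.6):

```python
# Map a task's success rate to the hardness tier defined in the table above.
def hardness_tier(success_rate: float) -> str:
    if success_rate >= 0.9:
        return "easy"
    if success_rate >= 0.6:
        return "medium"
    return "hard"

print(hardness_tier(0.95), hardness_tier(0.75), hardness_tier(0.4))  # easy medium hard
```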
We will update our paper with this specification of hardness classification in the evaluation studies.
**[Question 3]**
In ALFWorld, each of the six task types is provided with a single human expert demonstration, resulting in a total of 6 expert demonstrations across all 134 tasks. In MiniWoB++, we use 38 human demonstrations spread over 53 tasks. For all LLM-based approaches we tested, including AdaPlanner, these expert demonstrations were created under task settings different from those in the test set, which precludes directly replaying them to solve test tasks. For instance, an expert demonstration in ALFWorld might involve receptacles and objects that do not align with those in the actual test cases.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. Most of my concerns have been addressed.
---
Rebuttal 1:
Rebuttal: **References**
[1] M. Shridhar, X. Yuan, M.-A. Cote, Y. Bisk, A. Trischler, and M. Hausknecht. ALFWorld: Aligning text and embodied environments for interactive learning. In International Conference on Learning Representations, 2021.
[2] G. Kim, P. Baldi, and S. McAleer. Language models can solve computer tasks. arXiv preprint arXiv:2303.17491, 2023.
[3] Tom Silver, Soham Dan, Kavitha Srinivas, Joshua B. Tenenbaum, Leslie Pack Kaelbling, and Michael Katz. Generalized planning in PDDL domains with pretrained large language models. arXiv preprint, 2023.
[4] B. Liu, Y. Jiang, X. Zhang, et al. LLM+P: Empowering large language models with optimal planning proficiency. arXiv preprint, 2023.
[5] Tom Silver, Varun Hariprasad, Reece S Shuttleworth, Nishanth Kumar, Tomás Lozano-Pérez, and Leslie Pack Kaelbling. Pddl planning with pretrained large language models. In NeurIPS 2022 Foundation Models for Decision Making Workshop, 2022.
[6] Vishal Pallagani, Bharath Muppasani, Keerthiram Murugesan, Francesca Rossi, Lior Horesh, Biplav Srivastava, Francesco Fabiano, and Andrea Loreggia. Plansformer: Generating symbolic plans using transformers. arXiv preprint arXiv:2212.08681, 2022.
[7] Ye, Junjie, et al. "A comprehensive capability analysis of gpt-3 and gpt-3.5 series models." arXiv preprint arXiv:2303.10420 (2023).
[8] OpenAI. OpenAI Documentation: GPT-3.5 models. Retrieved May 16, 2023, from https://platform.openai.com/docs/models/gpt-3-5
[9] S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. R. Narasimhan, and Y. Cao. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, 2023.
[10] N. Shinn, B. Labash, and A. Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366, 2023.
[11] P. C. Humphreys, D. Raposo, T. Pohlen, G. Thornton, R. Chhaparia, A. Muldal, J. Abramson, P. Georgiev, A. Goldin, A. Santoro, and T. Lillicrap. A data-driven approach for learning to control computers. In Proceedings of the 39th International Conference on Machine Learning, PMLR 162, Baltimore, Maryland, USA, 2022.
[12] E. Z. Liu, K. Guu, P. Pasupat, and P. Liang. Reinforcement learning on web interfaces using workflow-guided exploration. In International Conference on Learning Representations, 2018.
[13] I. Gur, O. Nachum, Y. Miao, M. Safdari, A. Huang, A. Chowdhery, S. Narang, N. Fiedel, and A. Faust. Understanding HTML with large language models. arXiv preprint arXiv:2210.03945, 2022.
[14] Katja Filippova. 2020. Controlled Hallucinations: Learning to Generate Faithfully from Noisy Data. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings. 864–870.
[15] Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On Faithfulness and Factuality in Abstractive Summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 1906–1919.
[16] Ankur Parikh, Xuezhi Wang, Sebastian Gehrmann, Manaal Faruqui, Bhuwan Dhingra, Diyi Yang, and Dipanjan Das. 2020. ToTTo: A Controlled Table-To-Text Generation Dataset. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 1173–1186.
[17] OpenAI. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.